{"id":185389,"date":"2024-03-19T06:24:40","date_gmt":"2024-03-19T11:24:40","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/03\/comparing-feedforward-and-recurrent-neural-network-architectures-with-human-behavior-in-artificial-grammar-learning"},"modified":"2024-03-19T06:24:40","modified_gmt":"2024-03-19T11:24:40","slug":"comparing-feedforward-and-recurrent-neural-network-architectures-with-human-behavior-in-artificial-grammar-learning","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/03\/comparing-feedforward-and-recurrent-neural-network-architectures-with-human-behavior-in-artificial-grammar-learning","title":{"rendered":"Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/comparing-feedforward-and-recurrent-neural-network-architectures-with-human-behavior-in-artificial-grammar-learning2.jpg\"><\/a><\/p>\n<p><i>Scientific Reports<\/i> \u2013 a crucial aspect of <i>language<\/i> acquisition. Prior experimental studies proved that artificial <i>grammar<\/i>s can be learnt by human subjects after little exposure and often without <i>explicit<\/i> knowledge of the underlying rules. We tested four <i>grammar<\/i>s with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can \u201clearn\u201d (via error back-propagation) the <i>grammar<\/i>s after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the <i>grammar<\/i> complexity level. 
Moreover, similar to visual processing, in which feedforward and recurrent architectures have been related to unconscious and conscious processes, the difference in performance between architectures over ten <i>regular<\/i> <i>grammar<\/i>s shows that simpler and more <i>explicit<\/i> <i>grammar<\/i>s are better learnt by recurrent architectures, supporting the hypothesis that <i>explicit<\/i> learning is best modeled by recurrent networks, whereas feedforward networks are thought to capture the dynamics involved in <i>implicit<\/i> learning.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Scientific Reports \u2013 a crucial aspect of language acquisition. Prior experimental studies proved that artificial grammars can be learnt by human subjects after little exposure and often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-185389","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/185389","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=185389"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/185389\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=
185389"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=185389"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=185389"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}