{"id":157122,"date":"2023-02-06T03:26:29","date_gmt":"2023-02-06T09:26:29","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/02\/vectors-of-cognitive-ai-attention"},"modified":"2023-02-06T03:26:29","modified_gmt":"2023-02-06T09:26:29","slug":"vectors-of-cognitive-ai-attention","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/02\/vectors-of-cognitive-ai-attention","title":{"rendered":"Vectors of Cognitive AI: Attention"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/IncnIIpeKVM?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n   picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>Panelists: michael graziano, jonathan cohen, vasudev lal, joscha bach.<\/p>\n<p>The seminal contribution \u201cAttention is all you need\u201d (Vasvani et al. 2017), which introduced the Transformer algorithm, triggered a small revolution in machine learning. Unlike convolutional neural networks, which construct each feature out of a fixed neighborhood of signals, Transformers learn which data a feature on the next layer of a neural network should attend to. However, attention in neural networks is very different from the integrated attention in a human mind. In our minds, attention seems to be part of a top-down mechanism that actively creates a coherent, dynamic model of reality, and plays a crucial role in planning, inference, reflection and creative problem solving. Our consciousness appears to be involved in maintaining the control model of our attention.<\/p>\n<p>In this panel, we want to discuss avenues into our understanding of attention, in the context of machine learning, cognitive science and future developments of AI.<\/p>\n<p>Full program and references: <a href=\"https:\/\/cognitive-ai-panel.webflow.io\/panels\/attention\">https:\/\/cognitive-ai-panel.webflow.io\/panels\/attention<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Panelists: michael graziano, jonathan cohen, vasudev lal, joscha bach. The seminal contribution \u201cAttention is all you need\u201d (Vasvani et al. 2017), which introduced the Transformer algorithm, triggered a small revolution in machine learning. 
In this panel, we want to discuss avenues into our understanding of attention in the context of machine learning, cognitive science, and future developments of AI.

Full program and references: https://cognitive-ai-panel.webflow.io/panels/attention