{"id":191755,"date":"2024-06-25T12:24:57","date_gmt":"2024-06-25T17:24:57","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/06\/neuronal-representation-of-visual-working-memory-content-in-the-primate-primary-visual-cortex"},"modified":"2024-06-25T12:24:57","modified_gmt":"2024-06-25T17:24:57","slug":"neuronal-representation-of-visual-working-memory-content-in-the-primate-primary-visual-cortex","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/06\/neuronal-representation-of-visual-working-memory-content-in-the-primate-primary-visual-cortex","title":{"rendered":"Neuronal representation of visual working memory content in the primate primary visual cortex"},"content":{"rendered":"<p style=\"padding-right: 20px\"><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/neuronal-representation-of-visual-working-memory-content-in-the-primate-primary-visual-cortex3.jpg\"><\/a><\/p>\n<p>To capture a broader understanding of memory encoding, we expanded our experiments to include two other stimulus types: colors and face pictures (see Materials and Methods). Both monkeys demonstrated high accuracy in memorizing grating orientations in the \u201corientation DMTS\u201d task, colors in the \u201ccolor DMTS\u201d task, and face pictures in the \u201cface DMTS\u201d task [DP: ~94% and DQ: ~87% versus 50%, all <i>P<\/i> &lt; 0.01 (one-sample <i>t<\/i> test)] (fig. S1), indicating that they had been well trained.<\/p>\n<p>We implanted a Utah array in each monkey\u2019s V1 area (see Materials and Methods; <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adk3953#F1\" class=\"\">Fig. 1B<\/a>) and presented the stimuli onto the receptive field (RF) centers of the recorded neurons (fig. S2, A and D). This enabled simultaneous monitoring of neuronal activity in our experiments. 
Our analyses focused primarily on neuronal activity before probe stimulus onset.<\/p>\n<p>Representative neuronal responses for two of the visual working memory (VWM) content conditions in the orientation DMTS task at a selected electrode are shown in <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adk3953#F1\" class=\"\">Fig. 1C<\/a>. During the stimulus period (0 to 200 ms after cue onset), neurons displayed distinct firing patterns between the two content conditions (90\u00b0 or 180\u00b0 orientation). An off-response emerged following the cue offset, and activity gradually diminished. During the delay period, defined as 700 to 1,700 ms after cue onset (the thick gray line in <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adk3953#F1\" class=\"\">Fig. 1C<\/a>), neurons also exhibited a significant difference in firing rate between the two content conditions (<i>N<\/i> = 1,810 trials for 90\u00b0; <i>N<\/i> = 1,865 trials for 180\u00b0; all marked positions <i>P<\/i> &lt; 0.01) without any behavioral performance bias (<i>N<\/i> = 16 sessions, <i>P<\/i> = 0.94; right panel in <a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adk3953#F1\" class=\"\">Fig. 1C<\/a>). The difference in response between these two content conditions during the delay period at the same electrode was less prominent in incorrect-response trials and in the fixation task (<a href=\"https:\/\/www.science.org\/doi\/10.1126\/sciadv.adk3953#F1\" class=\"\">Fig. 1D<\/a>).<\/p>\n","protected":false},"excerpt":{"rendered":"<p>To capture a broader understanding of memory encoding, we expanded our experiments to include two other stimulus types: colors and face pictures (see Materials and Methods). 
Both monkeys demonstrated high accuracy in memorizing grating orientations in the \u201corientation DMTS\u201d task, colors in the \u201ccolor DMTS\u201d task, and face pictures in the \u201cface DMTS\u201d task [DP: [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1635,47],"tags":[],"class_list":["post-191755","post","type-post","status-publish","format-standard","hentry","category-materials","category-neuroscience"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/191755","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=191755"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/191755\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=191755"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=191755"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=191755"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}