{"id":177174,"date":"2023-12-01T08:25:02","date_gmt":"2023-12-01T14:25:02","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/12\/decoding-motor-plans-using-a-closed-loop-ultrasonic-brain-machine-interface"},"modified":"2023-12-01T08:25:02","modified_gmt":"2023-12-01T14:25:02","slug":"decoding-motor-plans-using-a-closed-loop-ultrasonic-brain-machine-interface","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/12\/decoding-motor-plans-using-a-closed-loop-ultrasonic-brain-machine-interface","title":{"rendered":"Decoding motor plans using a closed-loop ultrasonic brain\u2013machine interface"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/decoding-motor-plans-using-a-closed-loop-ultrasonic-brain-machine-interface2.jpg\"><\/a><\/p>\n<p>BMIs using intracortical electrodes, such as Utah arrays, are particularly adept at sensing fast changing (millisecond-scale) neural activity from spatially localized regions (1 cm) during behavior or stimulation that is correlated to activity in such spatially specific regions, for example, M1 for motor and V1 for vision. Intracortical electrodes, however, struggle to track individual neurons over longer periods of time, for example, between subsequent recording sessions<sup>15,16<\/sup>. Consequently, decoders are typically retrained every day<sup>15<\/sup>. A similar neural population identification problem is also present with an ultrasound device, including from shifts in the field of view between experiment sessions. In the current study, we demonstrated an alignment method that stabilizes image-based BMIs across more than a month and decodes from the same neurovascular populations with minimal, if any, retraining. This is a critical development that enables easy alignment of a previous days\u2019 models to a new day\u2019s data and allows decoding to begin with minimal to no new training data. 
Much effort has focused on ways to recalibrate intracortical BMIs across days that do not require extensive new data<sup>18,19,20,21,22,23<\/sup>. Most of these methods require identifying manifolds and\/or latent dynamical parameters and collecting new neural and behavioral data to align to these manifolds\/parameters. These techniques are, to date, tailored to each research group\u2019s specific applications with varying requirements, such as hyperparameter tuning of the model<sup>23<\/sup> or a consistent temporal structure of data<sup>22<\/sup>. They are also susceptible to changes in function in addition to changes in anatomy. For example, \u2018out-of-manifold\u2019 learning\/plasticity alters the manifold<sup>24<\/sup> in ways that many alignment techniques struggle to address. Finally, some of the algorithms are computationally expensive and\/or difficult to implement for online use<sup>22<\/sup>.<\/p>\n<p>In contrast to these manifold-based methods, our decoder alignment algorithm leverages the intrinsic spatial resolution and field of view provided by fUS neuroimaging to perform decoder stabilization in a way that is intuitive, repeatable and performant. We used a single fUS frame (\u223c500 ms) to generate an image of the current session\u2019s anatomy and aligned a previous session\u2019s field of view to this single image. Notably, this did not require any additional behavior for the alignment. Because we only relied upon the anatomy, our decoder alignment is robust, can use any off-the-shelf alignment tool and is a valid technique so long as the anatomy and mesoscopic encoding of relevant variables do not change drastically between sessions.<\/p>\n<p>It remains an open question as to how much the precise positioning of the ultrasound transducer during each session matters for decoder performance, especially out-of-plane shifts or rotations.
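The single-frame, anatomy-only alignment step can be sketched in code. This is a minimal sketch, assuming integer-pixel FFT-based phase cross-correlation stands in for the off-the-shelf alignment tool and treating each session's vascular image as a 2D array; all function and variable names here are illustrative, not taken from the authors' implementation:

```python
import numpy as np

def estimate_inplane_shift(ref_frame, new_frame):
    """Estimate the (row, col) translation that aligns new_frame to ref_frame
    via FFT-based phase cross-correlation (integer-pixel precision)."""
    cross_power = np.fft.fft2(ref_frame) * np.conj(np.fft.fft2(new_frame))
    cross_power /= np.abs(cross_power) + 1e-12  # keep only the phase term
    corr = np.abs(np.fft.ifft2(cross_power))
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Peaks past the midpoint wrap around to negative shifts (circular FFT).
    return tuple(p if p <= n // 2 else p - n for p, n in zip(peak, corr.shape))

def align_frame(ref_frame, new_frame):
    """Translate new_frame so its pixels line up with ref_frame's pixels."""
    shift = estimate_inplane_shift(ref_frame, new_frame)
    return np.roll(new_frame, shift, axis=(0, 1))
```

In this scheme a single anatomical frame from the new session is registered to a reference session's field of view, after which the earlier session's decoder can be applied pixel for pixel. Note that phase correlation recovers only in-plane translation; in-plane rotation or out-of-plane shifts would require a richer registration model.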
In the current experiments, we used linear decoders that assumed a given image pixel is the same brain voxel across all aligned data sessions. To minimize disruptions to this pixel\u2013voxel relationship, we performed image alignment within the 2D plane. As we could only image a 2D recording plane, we did not correct for any out-of-plane brain shifts between sessions that would have disrupted the pixel\u2013voxel mapping assumption. Future fUS-BMI decoders may benefit from three-dimensional (3D) models of the neurovasculature, such as registering the 2D field of view to a 3D volume<sup><a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Demen\u00e9, C. et al. 4D microvascular imaging based on ultrafast Doppler tomography. NeuroImage 127, 472&ndash;483 (2016).\" href=\"https:\/\/www.nature.com\/articles\/s41593-023-01500-7#ref-CR25\" id=\"ref-link-section-d98121598e1526\">25<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" title=\"Rabut, C. et al. 4D functional ultrasound imaging of whole-brain activity in rodents. Nat. Methods 16, 994&ndash;997 (2019).\" href=\"https:\/\/www.nature.com\/articles\/s41593-023-01500-7#ref-CR26\" id=\"ref-link-section-d98121598e1526_1\">26<\/a>,<a data-track=\"click\" data-track-action=\"reference anchor\" data-track-label=\"link\" data-test=\"citation-ref\" aria-label=\"Reference 27\" title=\"Brunner, C. et al. A platform for brain-wide volumetric functional ultrasound imaging and analysis of circuit dynamics in awake mice. 
Neuron 108, 861&ndash;875.e7 (2020).\" href=\"https:\/\/www.nature.com\/articles\/s41593-023-01500-7#ref-CR27\" id=\"ref-link-section-d98121598e1529\">27<\/a><\/sup> to better maintain a consistent pixel\u2013voxel mapping.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>BMIs using intracortical electrodes, such as Utah arrays, are particularly adept at sensing fast changing (millisecond-scale) neural activity from spatially localized regions (1 cm) during behavior or stimulation that is correlated to activity in such spatially specific regions, for example, M1 for motor and V1 for vision. Intracortical electrodes, however, struggle to track individual neurons [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[41,1965,47],"tags":[],"class_list":["post-177174","post","type-post","status-publish","format-standard","hentry","category-information-science","category-mapping","category-neuroscience"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/177174","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=177174"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/177174\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=177174"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=177174"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=17
7174"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}