{"id":210222,"date":"2025-03-31T21:18:05","date_gmt":"2025-04-01T02:18:05","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/03\/brain-to-voice-interface-converts-thoughts-to-speech-in-near-real-time"},"modified":"2025-03-31T21:18:05","modified_gmt":"2025-04-01T02:18:05","slug":"brain-to-voice-interface-converts-thoughts-to-speech-in-near-real-time","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/03\/brain-to-voice-interface-converts-thoughts-to-speech-in-near-real-time","title":{"rendered":"Brain-to-voice interface converts thoughts to speech in near-real time"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/brain-to-voice-interface-converts-thoughts-to-speech-in-near-real-time2.jpg\"><\/a><\/p>\n<p>Marking a breakthrough in the field of brain-computer interfaces (BCIs), a team of researchers from UC Berkeley and UC San Francisco has unlocked a way to restore naturalistic speech for people with severe paralysis.<\/p>\n<p>This work solves the long-standing challenge of latency in speech neuroprostheses, the time lag between when a subject attempts to speak and when sound is produced. Using recent advances in artificial intelligence-based modeling, the researchers developed a streaming method that synthesizes brain signals into audible speech in near-real time.<\/p>\n<p>As <a href=\"https:\/\/www.nature.com\/articles\/s41593-025-01905-6\" target=\"_blank\">reported<\/a> in <i>Nature Neuroscience<\/i>, this technology represents a critical step toward enabling communication for people who have lost the ability to speak.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Marking a breakthrough in the field of brain-computer interfaces (BCIs), a team of researchers from UC Berkeley and UC San Francisco has unlocked a way to restore naturalistic speech for people with severe paralysis. \nThis work solves the long-standing challenge of latency in speech neuroprostheses, the time lag between when a subject attempts to speak [\u2026]<\/p>\n","protected":false},"author":732,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1522,6],"tags":[],"class_list":["post-210222","post","type-post","status-publish","format-standard","hentry","category-innovation","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/210222","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/732"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=210222"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/210222\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=210222"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=210222"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=210222"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}