The key to this development is an AI-powered streaming method. By decoding brain signals directly from the motor cortex – the brain’s speech control center – the AI synthesizes audible speech almost instantly.
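The article does not describe the study's model, but the streaming idea itself can be sketched: decode each short chunk of neural features as it arrives and emit audio immediately, so latency is roughly one chunk rather than one whole utterance. Everything below is hypothetical for illustration only; the chunk size, feature format, and `decode_chunk` stand-in are assumptions, not the study's actual system.

```python
from typing import Iterator, List

CHUNK_MS = 80  # hypothetical chunk duration; real systems tune this for latency


def neural_feature_stream(n_chunks: int, dim: int = 4) -> Iterator[List[float]]:
    """Stand-in for electrode features arriving in real time."""
    for t in range(n_chunks):
        yield [float(t * dim + i) for i in range(dim)]


def decode_chunk(features: List[float]) -> List[float]:
    """Hypothetical decoder mapping one feature chunk to audio samples.
    A real system would run a trained neural network here."""
    return [x * 0.01 for x in features]


def streaming_synthesis(n_chunks: int) -> List[float]:
    """Emit audio incrementally; each chunk is decoded as soon as it arrives,
    instead of buffering the whole utterance before synthesis."""
    audio: List[float] = []
    for chunk in neural_feature_stream(n_chunks):
        audio.extend(decode_chunk(chunk))  # output is available immediately
    return audio


audio = streaming_synthesis(3)
print(len(audio))  # 3 chunks of 4 features each -> 12 samples
```

The contrast with a non-streaming pipeline is the loop body: a batch decoder would collect all chunks first and decode once at the end, making the user wait for the full utterance before hearing anything.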
“Our streaming approach brings the same rapid speech decoding capacity of devices like Alexa and Siri to neuroprostheses,” said Gopala Anumanchipalli, co-principal investigator of the study.
Anumanchipalli added, “Using a similar type of algorithm, we found that we could decode neural data and, for the first time, enable near-synchronous voice streaming. The result is more naturalistic, fluent speech synthesis.”