A research team led by Prof. Li Hai from the Hefei Institutes of Physical Science of the Chinese Academy of Sciences has developed a novel deep learning framework that significantly improves the accuracy and interpretability of detecting neurological disorders through speech. The findings were recently published in Neurocomputing.
“A slight change in the way we speak might be more than just a slip of the tongue—it could be a warning sign from the brain,” said Prof. Li, who led the team. “Our new model can detect early symptoms of neurological diseases such as Parkinson’s, Huntington’s, and Wilson’s disease by analyzing voice recordings.”
Dysarthria is a common early symptom of various neurological disorders. Since speech abnormalities often reflect underlying neurodegenerative processes, voice signals have emerged as promising noninvasive biomarkers for the early screening and continuous monitoring of such conditions.