Humans can communicate a range of nonverbal emotions, from terrified shrieks to exasperated groans. Voice inflections and cues can convey subtle feelings, from ecstasy and agony to arousal and disgust. Even in ordinary speech, the human voice is stuffed with meaning, and with a lot of potential value if you're a company collecting personal data.
Now, researchers at Imperial College London have used AI to mask the emotional cues in users' voices when they speak to internet-connected voice assistants. The idea is to put a "layer" between the user and the cloud their data is uploaded to by automatically converting emotional speech into "normal" speech. They recently published their paper "Emotionless: Privacy-Preserving Speech Analysis for Voice Assistants" on the arXiv preprint server.
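The architecture described above can be sketched in a few lines. This is a toy illustration only, not the paper's method: the masking step here is a crude amplitude flattening stand-in, and every function name is hypothetical.

```python
# Toy sketch of the "layer" idea: audio passes through an
# emotion-masking stage before anything reaches the cloud.
# The masking is a crude stand-in (amplitude flattening), not
# the paper's actual model; all names are illustrative.

def mask_emotion(samples):
    """Flatten dynamic range as a crude stand-in for emotion masking."""
    peak = max(abs(s) for s in samples) or 1.0
    return [s / peak * 0.5 for s in samples]

def upload_to_cloud(samples):
    """Placeholder for the voice assistant's cloud endpoint."""
    return {"received": len(samples)}

def assistant_pipeline(samples):
    # The privacy layer sits between the microphone and the upload,
    # so only the masked signal ever leaves the device.
    return upload_to_cloud(mask_emotion(samples))

print(assistant_pipeline([0.8, -1.6, 0.4]))
```

The key design point is simply where the layer sits: the masking runs locally, before upload, so the cloud service never sees the raw emotional signal.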
Our voices can reveal our confidence and stress levels, physical condition, age, gender, and personal traits. This isn’t lost on smart speaker makers, and companies such as Amazon are always working to improve the emotion-detecting abilities of AI.