Microsoft's new AI can simulate anyone's voice with 3 seconds of audio

Published January 11, 2023 at https://lifeboat.com/blog/2023/01/microsofts-new-ai-can-simulate-anyones-voice-with-3-seconds-of-audio

On Thursday, Microsoft researchers announced a new text-to-speech AI model called VALL-E that can closely simulate a person's voice when given a three-second audio sample. Once it learns a specific voice, VALL-E can synthesize audio of that person saying anything, and do it in a way that attempts to preserve the speaker's emotional tone.

Microsoft calls VALL-E a "neural codec language model," and it builds on a technology called EnCodec, which Meta announced in October 2022. Unlike typical text-to-speech methods, which synthesize speech by manipulating waveforms, VALL-E generates discrete audio codec codes from text and acoustic prompts.
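To make "discrete audio codec codes" concrete, here is a toy sketch of the underlying idea. This is not VALL-E's or EnCodec's actual API; it is a hypothetical, simplified vector quantizer. A neural codec like EnCodec compresses audio by snapping each short frame of the waveform to the nearest entry in a learned codebook, so the signal becomes a sequence of integer codes that a language model can then predict like text tokens.

```python
# Toy illustration (hypothetical, simplified): represent audio frames as
# discrete codebook indices, the kind of token sequence VALL-E models.

def quantize(frames, codebook):
    """Map each frame (a vector of samples) to the index of its nearest codebook entry."""
    def dist(a, b):
        # Squared Euclidean distance between two frames.
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return [min(range(len(codebook)), key=lambda i: dist(f, codebook[i]))
            for f in frames]

def dequantize(codes, codebook):
    """Reconstruct an approximate (lossy) signal from the discrete codes."""
    return [codebook[c] for c in codes]

# A tiny "learned" codebook of four two-sample audio frames (illustrative values).
codebook = [(0.0, 0.0), (0.5, 0.5), (-0.5, -0.5), (1.0, -1.0)]

# A fake waveform already split into two-sample frames.
frames = [(0.1, -0.05), (0.45, 0.6), (0.9, -0.8)]

codes = quantize(frames, codebook)    # discrete tokens: [0, 1, 3]
approx = dequantize(codes, codebook)  # lossy reconstruction from those tokens
print(codes)
```

In the real system the codebook is learned by a neural network and there are multiple residual codebooks per frame, but the end product is the same in spirit: a short sequence of integers standing in for the waveform, which is what lets a text-style language model generate audio.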