{"id":111517,"date":"2020-08-18T07:43:43","date_gmt":"2020-08-18T14:43:43","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2020\/08\/mix-stage-a-model-that-can-generate-gestures-to-accompany-a-virtual-agents-speech"},"modified":"2020-08-18T07:43:43","modified_gmt":"2020-08-18T14:43:43","slug":"mix-stage-a-model-that-can-generate-gestures-to-accompany-a-virtual-agents-speech","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2020\/08\/mix-stage-a-model-that-can-generate-gestures-to-accompany-a-virtual-agents-speech","title":{"rendered":"Mix-StAGE: A model that can generate gestures to accompany a virtual agent\u2019s speech"},"content":{"rendered":"<p>Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses, they should also mimic humans in the way they speak.<\/p>\n<p>Researchers at Carnegie Mellon University (CMU) have recently carried out a study aimed at improving how <a href=\"https:\/\/techxplore.com\/tags\/virtual+assistants\/\" rel=\"tag\" class=\"\">virtual assistants<\/a> and robots communicate with humans by generating <a href=\"https:\/\/techxplore.com\/tags\/natural+gestures\/\" rel=\"tag\" class=\"\">natural gestures<\/a> to accompany their speech. 
Their paper, <a href=\"https:\/\/arxiv.org\/abs\/2007.12553\">pre-published on arXiv<\/a> and set to be presented at the <a href=\"https:\/\/eccv2020.eu\/\">European Conference on Computer Vision (ECCV) 2020<\/a>, introduces Mix-StAGE, a new <a href=\"https:\/\/techxplore.com\/tags\/model\/\" rel=\"tag\" class=\"\">model<\/a> that can produce different styles of co-speech gestures that best match the voice of a <a href=\"https:\/\/techxplore.com\/tags\/speaker\/\" rel=\"tag\" class=\"\">speaker<\/a> and what he\/she is saying.<\/p>\n<p>\u201cImagine a situation where you are communicating with a friend in a <a href=\"https:\/\/techxplore.com\/tags\/virtual+space\/\" rel=\"tag\" class=\"\">virtual space<\/a> through a <a href=\"https:\/\/techxplore.com\/tags\/virtual+reality+headset\/\" rel=\"tag\" class=\"\">virtual reality headset<\/a>,\u201d Chaitanya Ahuja, one of the researchers who carried out the study, told TechXplore. \u201cThe headset is only able to hear your voice, but not able to see your hand gestures. The goal of our model is to predict the <a href=\"https:\/\/techxplore.com\/tags\/hand+gestures\/\" rel=\"tag\" class=\"\">hand gestures<\/a> accompanying the speech.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Virtual assistants and robots are becoming increasingly sophisticated, interactive and human-like. To fully replicate human communication, however, artificial intelligence (AI) agents should not only be able to determine what users are saying and produce adequate responses, they should also mimic humans in the way they speak. 
Researchers at Carnegie Mellon University (CMU) have recently carried [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6,8,1879],"tags":[],"class_list":["post-111517","post","type-post","status-publish","format-standard","hentry","category-robotics-ai","category-space","category-virtual-reality"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/111517","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=111517"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/111517\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=111517"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=111517"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=111517"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}