{"id":25673,"date":"2016-05-13T09:40:44","date_gmt":"2016-05-13T16:40:44","guid":{"rendered":"http:\/\/lifeboat.com\/blog\/2016\/05\/new-digital-face-manipulation-means-you-cant-trust-video-anymore"},"modified":"2017-04-24T20:51:56","modified_gmt":"2017-04-25T03:51:56","slug":"new-digital-face-manipulation-means-you-cant-trust-video-anymore","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2016\/05\/new-digital-face-manipulation-means-you-cant-trust-video-anymore","title":{"rendered":"New Digital Face Manipulation Means You Can\u2019t Trust Video Anymore"},"content":{"rendered":"<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/ohmajJTcpNk?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>What if you could alter a video of anyone to emulate facial and mouth movements that never existed in the source video\u2014by yourself, at home, using a cheap webcam?<\/p>\n<p>Meet <a href=\"http:\/\/www.graphics.stanford.edu\/~niessner\/thies2016face.html\">Face2Face<\/a>. Using RGB input from one video and pixels mapped from a second, it makes manipulating someone\u2019s face\u2014including distinct facial and mouth movements\u2014incredibly easy. A team of researchers recently released a video showing what this looks like in real time. 
While the method is still imperfect, it has major implications for future online content.<\/p>\n<p>According to the <a href=\"http:\/\/www.graphics.stanford.edu\/~niessner\/papers\/2016\/1facetoface\/thies2016face.pdf\">team\u2019s publication<\/a>: \u201cOur goal is to animate the facial expressions of the target video by a source actor and re-render the manipulated output video in a photo-realistic fashion.\u201d<\/p>\n<p><!-- Link: <a href=\"http:\/\/singularityhub.com\/2016\/05\/13\/new-digital-face-manipulation-means-you-cant-trust-video-anymore\/\">http:\/\/singularityhub.com\/2016\/05\/13\/new-digital-face-manipu...o-anymore\/<\/a> --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>What if you could alter a video of anyone to emulate facial and mouth movements that never existed in the source video\u2014by yourself, at home, using a cheap webcam? Meet Face2Face. Using RGB input from one video and pixels mapped from a second, it makes manipulating someone\u2019s face\u2014including distinct facial and mouth movements\u2014incredibly easy. 
[\u2026]<\/p>\n","protected":false},"author":395,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20],"tags":[],"class_list":["post-25673","post","type-post","status-publish","format-standard","hentry","category-futurism"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/25673","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/395"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=25673"}],"version-history":[{"count":1,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/25673\/revisions"}],"predecessor-version":[{"id":41898,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/25673\/revisions\/41898"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=25673"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=25673"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=25673"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}