{"id":95615,"date":"2019-08-30T21:22:24","date_gmt":"2019-08-31T04:22:24","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2019\/08\/a-deep-learning-technique-for-context-aware-emotion-recognition"},"modified":"2019-08-30T21:22:24","modified_gmt":"2019-08-31T04:22:24","slug":"a-deep-learning-technique-for-context-aware-emotion-recognition","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2019\/08\/a-deep-learning-technique-for-context-aware-emotion-recognition","title":{"rendered":"A deep learning technique for context-aware emotion recognition"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/a-deep-learning-technique-for-context-aware-emotion-recognition.jpg\"><\/a><\/p>\n<p>A team of researchers at Yonsei University and \u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL) has developed a new technique that recognizes emotions by analyzing people\u2019s faces in images along with contextual features. They presented their deep learning-based architecture, called CAER-Net, in a paper pre-published on arXiv.<\/p>\n<p>For several years, researchers worldwide have been trying to develop tools for automatically detecting <a href=\"https:\/\/techxplore.com\/tags\/human+emotions\/\" rel=\"tag\" class=\"\">human emotions<\/a> by analyzing images, videos or audio clips. These tools could have numerous applications, for instance, improving robot-human interactions or helping doctors identify signs of mental or neural disorders (e.g., based on atypical speech patterns, facial features, etc.).<\/p>\n<p>So far, the majority of techniques for recognizing emotions in images have been based on the analysis of people\u2019s facial expressions, essentially assuming that these expressions best convey humans\u2019 emotional responses. 
As a result, most datasets for training and evaluating emotion recognition tools (e.g., the AFEW and FER2013 datasets) contain only cropped images of human faces.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>A team of researchers at Yonsei University and \u00c9cole Polytechnique F\u00e9d\u00e9rale de Lausanne (EPFL) has developed a new technique that recognizes emotions by analyzing people\u2019s faces in images along with contextual features. They presented their deep learning-based architecture, called CAER-Net, in a paper pre-published on arXiv. For several years, researchers worldwide [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,6],"tags":[],"class_list":["post-95615","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/95615","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=95615"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/95615\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=95615"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=95615"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=95615"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":t
rue}]}}