{"id":140970,"date":"2022-06-22T23:22:34","date_gmt":"2022-06-23T04:22:34","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/06\/model-moves-computers-closer-to-understanding-human-conversation"},"modified":"2022-06-22T23:22:34","modified_gmt":"2022-06-23T04:22:34","slug":"model-moves-computers-closer-to-understanding-human-conversation","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/06\/model-moves-computers-closer-to-understanding-human-conversation","title":{"rendered":"Model moves computers closer to understanding human conversation"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/model-moves-computers-closer-to-understanding-human-conversation2.jpg\"><\/a><\/p>\n<p>An engineer from the Johns Hopkins Center for Language and Speech Processing has developed a machine learning model that can distinguish functions of speech in transcripts of dialogs output by language understanding, or LU, systems, an approach that could eventually help computers \u201cunderstand\u201d spoken or written text in much the same way that humans do.<\/p>\n<p>Developed by CLSP Assistant Research Scientist Piotr Zelasko, the new model identifies the intent behind words in the final transcript and organizes them into categories such as \u201cStatement,\u201d \u201cQuestion,\u201d or \u201cInterruption\u201d: a task called \u201cdialog act recognition.\u201d By providing other models with a more organized and segmented version of text to work with, Zelasko\u2019s model could become a first step in making sense of a conversation, he said.<\/p>\n<p>\u201cThis new method means that LU systems no longer have to deal with huge, unstructured chunks of text, which they struggle with when trying to classify things such as the topic, sentiment, or intent of the text. Instead, they can work with a series of expressions, each of which says something specific, such as a question or an interruption. 
My model enables these systems to work where they might have otherwise failed,\u201d said Zelasko, whose study appeared recently in Transactions of the Association for Computational Linguistics.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>An engineer from the Johns Hopkins Center for Language and Speech Processing has developed a machine learning model that can distinguish functions of speech in transcripts of dialogs outputted by language understanding, or LU, systems in an approach that could eventually help computers \u201cunderstand\u201d spoken or written text in much the same way that humans [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-140970","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/140970","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=140970"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/140970\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=140970"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=140970"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=140970"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}
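The "dialog act recognition" task described in the post can be illustrated with a toy sketch. The heuristic below is purely hypothetical and is not Zelasko's model (which is a learned classifier published in Transactions of the Association for Computational Linguistics); it only shows the kind of input a transcript provides and the kind of segmented, labeled output that downstream LU systems would consume.

```python
# Hypothetical sketch of dialog act recognition: split a transcript into
# utterances and tag each one with a coarse dialog-act label such as
# "Statement", "Question", or "Interruption". Real systems learn these
# labels from data; this rule-based stand-in only demonstrates the
# input/output shape.

def tag_dialog_acts(transcript):
    """Attach a coarse dialog-act label to each utterance in a transcript."""
    tagged = []
    for utterance in transcript:
        text = utterance.strip()
        if text.endswith("?"):
            act = "Question"
        elif text.endswith("--"):
            # A dangling dash is a common transcript convention for a
            # turn that was cut off mid-sentence.
            act = "Interruption"
        else:
            act = "Statement"
        tagged.append((act, text))
    return tagged

transcript = [
    "I think the meeting moved to Friday.",
    "Are you sure about that?",
    "Well, I heard from--",
    "Sorry, go ahead.",
]

for act, text in tag_dialog_acts(transcript):
    print(f"{act}: {text}")
```

A downstream model classifying topic, sentiment, or intent would then receive these short, labeled expressions rather than one unstructured block of text, which is the benefit the quote above describes.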