{"id":90790,"date":"2019-05-16T14:02:41","date_gmt":"2019-05-16T21:02:41","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2019\/05\/new-ai-sees-like-a-human-filling-in-the-blanks"},"modified":"2019-05-16T14:02:41","modified_gmt":"2019-05-16T21:02:41","slug":"new-ai-sees-like-a-human-filling-in-the-blanks","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2019\/05\/new-ai-sees-like-a-human-filling-in-the-blanks","title":{"rendered":"New AI sees like a human, filling in the blanks"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/new-ai-sees-like-a-human-filling-in-the-blanks2.jpg\"><\/a><\/p>\n<p>Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do \u2014 take a few quick glimpses around and infer its whole environment, a skill necessary for the development of effective search-and-rescue robots that one day can improve the effectiveness of dangerous missions. The team, led by professor Kristen Grauman, Ph.D. candidate Santhosh Ramakrishnan and former Ph.D. candidate Dinesh Jayaraman (now at the University of California, Berkeley) published their results today in the journal <em>Science Robotics<\/em>.<\/p>\n<p>Most AI agents \u2014 computer systems that could endow robots or other machines with intelligence \u2014 are trained for very specific tasks \u2014 such as to recognize an object or estimate its volume \u2014 in an environment they have experienced before, like a factory. But the agent developed by Grauman and Ramakrishnan is general purpose, gathering visual information that can then be used for a wide range of tasks.<\/p>\n<p>\u201cWe want an agent that\u2019s generally equipped to enter environments and be ready for new perception tasks as they arise,\u201d Grauman said. 
\u201cIt behaves in a way that\u2019s versatile and able to succeed at different tasks because it has learned useful patterns about the visual world.\u201d<\/p>\n<div style=\"clear:both;\"><a href=\"https:\/\/www.sciencedaily.com\/releases\/2019\/05\/190515144017.htm\" target=\"_blank\" rel=\"noopener noreferrer\">Read more<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Computer scientists at The University of Texas at Austin have taught an artificial intelligence agent how to do something that usually only humans can do \u2014 take a few quick glimpses around and infer its whole environment, a skill necessary for the development of search-and-rescue robots that could one day improve the effectiveness of [\u2026]<\/p>\n","protected":false},"author":396,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-90790","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/90790","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/396"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=90790"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/90790\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=90790"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=90790"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=90790"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}