{"id":179441,"date":"2023-12-29T11:22:29","date_gmt":"2023-12-29T17:22:29","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/12\/how-do-you-make-a-robot-smarter-program-it-to-know-what-it-doesnt-know"},"modified":"2023-12-29T11:22:29","modified_gmt":"2023-12-29T17:22:29","slug":"how-do-you-make-a-robot-smarter-program-it-to-know-what-it-doesnt-know","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/12\/how-do-you-make-a-robot-smarter-program-it-to-know-what-it-doesnt-know","title":{"rendered":"How do you make a robot smarter? Program it to know what it doesn\u2019t know"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/how-do-you-make-a-robot-smarter-program-it-to-know-what-it-doesnt-know2.jpg\"><\/a><\/p>\n<p>Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don\u2019t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. Telling a robot to pick up a bowl from a table with only one bowl is fairly clear. But telling a robot to pick up a bowl when there are five bowls on the table generates a much higher degree of uncertainty \u2014 and triggers the robot to ask for clarification.<\/p>\n<p>Because tasks are typically more complex than a simple \u201cpick up a bowl\u201d command, the engineers use large language models (LLMs) \u2014 the technology behind tools such as ChatGPT \u2014 to gauge uncertainty in complex environments. 
LLMs are giving robots powerful capabilities to follow human-language instructions, but LLM outputs are still frequently unreliable, said <a href=\"https:\/\/engineering.princeton.edu\/faculty\/anirudha-majumdar\">Anirudha Majumdar<\/a>, an assistant professor of <a href=\"https:\/\/mae.princeton.edu\/\">mechanical and aerospace engineering<\/a> at Princeton and the senior author of a <a href=\"https:\/\/openreview.net\/forum?id=4ZK8ODNyFXx\">study<\/a> outlining the new method.<\/p>\n<p>\u201cBlindly following plans generated by an LLM could cause robots to act in an unsafe or untrustworthy manner, and so we need our LLM-based robots to know when they don\u2019t know,\u201d said Majumdar.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Engineers at Princeton University and Google have come up with a new way to teach robots to know when they don\u2019t know. The technique involves quantifying the fuzziness of human language and using that measurement to tell robots when to ask for further directions. 
Telling a robot to pick up a bowl from a table [\u2026]<\/p>\n","protected":false},"author":662,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-179441","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/179441","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=179441"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/179441\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=179441"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=179441"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=179441"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}