{"id":76913,"date":"2018-03-14T17:42:51","date_gmt":"2018-03-15T00:42:51","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2018\/03\/a-new-test-could-tell-us-whether-an-ai-has-common-sense"},"modified":"2018-03-14T17:42:51","modified_gmt":"2018-03-15T00:42:51","slug":"a-new-test-could-tell-us-whether-an-ai-has-common-sense","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2018\/03\/a-new-test-could-tell-us-whether-an-ai-has-common-sense","title":{"rendered":"A new test could tell us whether an AI has common sense"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/a-new-test-could-tell-us-whether-an-ai-has-common-sense.jpg\"><\/a><\/p>\n<p>Virtual assistants and chatbots don\u2019t have a lot of common sense. It\u2019s because these types of machine learning rely on specific situations they have encountered before, rather than using broader knowledge to answer a question. However, researchers at the Allen Institute for AI (Ai2) have devised a new test, the Arc Reasoning Challenge (ARC) that can test <a href=\"https:\/\/www.engadget.com\/2017\/12\/22\/artificial-intelligence-2017-2018\/\">an artificial intelligence<\/a> on its understanding of the way our world operates.<\/p>\n<p>Humans use common sense to fill in the gaps of any question they are posed, delivering answers within an understood but non-explicit context. Peter Clark, the lead researcher on ARC, explained <a href=\"https:\/\/www.technologyreview.com\/s\/610521\/ai-assistants-dont-have-the-common-sense-to-avoid-talking-gibberish\/\">in a statement<\/a>, \u201cMachines do not have this common sense, and thus only see what is explicitly written, and miss the many implications and assumptions that underlie a piece of text.\u201d<\/p>\n<p>The test asks basic multiple-choice questions that draw from general knowledge. 
For example, one ARC question is: \u201cWhich item below is not made from a material grown in nature?\u201d The possible answers are a cotton shirt, a wooden chair, a plastic spoon, and a grass basket.<\/p>\n<p><!-- Link: <a href=\"https:\/\/www.engadget.com\/2018\/03\/14\/ai-arc-test-common-sense\/\">https:\/\/www.engadget.com\/2018\/03\/14\/ai-arc-test-common-sense\/<\/a> --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>Virtual assistants and chatbots don\u2019t have a lot of common sense. That\u2019s because these machine-learning systems rely on specific situations they have encountered before, rather than drawing on broader knowledge to answer a question. However, researchers at the Allen Institute for AI (AI2) have devised a new test, the AI2 Reasoning Challenge (ARC), which [\u2026]<\/p>\n","protected":false},"author":396,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1635,6],"tags":[],"class_list":["post-76913","post","type-post","status-publish","format-standard","hentry","category-materials","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/76913","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/396"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=76913"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/76913\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=76913"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=76913"},{"taxonomy":"post_tag","embed
dable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=76913"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}