{"id":79380,"date":"2018-06-09T19:38:25","date_gmt":"2018-06-10T02:38:25","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2018\/06\/mit-fed-an-ai-data-from-reddit-and-now-it-thinks-of-nothing-but-murder"},"modified":"2018-06-11T11:07:26","modified_gmt":"2018-06-11T18:07:26","slug":"mit-fed-an-ai-data-from-reddit-and-now-it-thinks-of-nothing-but-murder","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2018\/06\/mit-fed-an-ai-data-from-reddit-and-now-it-thinks-of-nothing-but-murder","title":{"rendered":"MIT fed an AI data from Reddit, and now it thinks of nothing but murder"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/mit-fed-an-ai-data-from-reddit-and-now-it-thinks-of-nothing-but-murder.jpg\"><\/a><\/p>\n<p>The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn\u2019t speculate about whether exposure to graphic content changes the way a human thinks. They\u2019ve done other experiments in the same vein, too, using AI to <a href=\"http:\/\/shelley.ai\/\">write horror stories<\/a>, <a href=\"http:\/\/nightmare.mit.edu\/\">create terrifying images<\/a>, <a href=\"http:\/\/moralmachine.mit.edu\/\">judge moral decisions<\/a>, and even <a href=\"https:\/\/deepempathy.mit.edu\/\">induce empathy<\/a>. This kind of research is important. We <em>should<\/em> be asking the same questions of artificial intelligence as we do of any other technology because it is far too easy for unintended consequences to hurt the people the system wasn\u2019t designed to see. Naturally, this is the basis of sci-fi: imagining possible futures and showing what could lead us there. 
Isaac Asimov wrote the \u201c<a href=\"https:\/\/en.wikipedia.org\/wiki\/Three_Laws_of_Robotics\">Three Laws of Robotics<\/a>\u201d because he wanted to imagine what might happen if they were contravened.<\/p>\n<p>Even though artificial intelligence isn\u2019t a new field, we\u2019re a long, long way from producing something that, as Gideon Lewis-Kraus <a href=\"https:\/\/www.nytimes.com\/2016\/12\/14\/magazine\/the-great-ai-awakening.html\">wrote in <em>The<\/em> <em>New York Times Magazine<\/em><\/a>, can \u201cdemonstrate a facility with the implicit, the interpretive.\u201d But the field still hasn\u2019t undergone the kind of reckoning that causes a discipline to grow up. Physics, you recall, gave us the atom bomb, and every person who becomes a physicist knows they might be called on to help create <a href=\"https:\/\/en.wikipedia.org\/wiki\/Franck_Report\">something that could fundamentally alter the world<\/a>. Computer scientists are beginning to realize this, too. At Google this year, <a href=\"https:\/\/jacobinmag.com\/2018\/06\/google-project-maven-military-tech-workers\">5,000 employees protested and a host of employees resigned<\/a> from the company because of its involvement with Project Maven, a Pentagon initiative that uses machine learning to improve the accuracy of drone strikes.<\/p>\n<p>Norman is just a thought experiment, but the questions it raises about machine learning algorithms making judgments and decisions based on biased data are urgent and necessary. Those systems, for example, are already used in credit underwriting, deciding whether or not loans are worth guaranteeing. What if an algorithm decides you shouldn\u2019t buy a house or a car? To whom do you appeal? What if you\u2019re not white and a <a href=\"http:\/\/nautil.us\/issue\/55\/trust\/are-algorithms-building-the-new-infrastructure-of-racism\">piece of software predicts you\u2019ll commit a crime<\/a> because of that? There are many, many open questions.
Norman\u2019s role is to help us figure out their answers.<\/p>\n<p><!-- Link: <a href=\"https:\/\/www.theverge.com\/2018\/6\/7\/17437454\/mit-ai-psychopathic-reddit-data-algorithmic-bias\">https:\/\/www.theverge.com\/2018\/6\/7\/17437454\/mit-ai-psychopath...thmic-bias<\/a> --><\/p>\n","protected":false},"excerpt":{"rendered":"<p>The point of the experiment was to show how easy it is to bias any artificial intelligence if you train it on biased data. The team wisely didn\u2019t speculate about whether exposure to graphic content changes the way a human thinks. They\u2019ve done other experiments in the same vein, too, using AI to write horror [\u2026]<\/p>\n","protected":false},"author":481,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1488,30,41,9,219,6],"tags":[],"class_list":["post-79380","post","type-post","status-publish","format-standard","hentry","category-drones","category-ethics","category-information-science","category-military","category-physics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/79380","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/481"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=79380"}],"version-history":[{"count":1,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/79380\/revisions"}],"predecessor-version":[{"id":79381,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/79380\/revisions\/79381"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=79380"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat
.com\/blog\/wp-json\/wp\/v2\/categories?post=79380"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=79380"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}