{"id":12344,"date":"2014-09-19T09:00:30","date_gmt":"2014-09-19T16:00:30","guid":{"rendered":"http:\/\/lifeboat.com\/blog\/?p=12344"},"modified":"2017-06-04T12:05:12","modified_gmt":"2017-06-04T19:05:12","slug":"is-artificial-intelligence-a-threat","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2014\/09\/is-artificial-intelligence-a-threat","title":{"rendered":"Is Artificial Intelligence a Threat?"},"content":{"rendered":"<p>By Angela Chen \u2014 The Chronicle of Higher Education<\/p>\n<p><\/p>\n<p><span class=\"dropcap\">W<\/span>hen the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips.<\/p>\n<p>So says Nick Bostrom, a philosopher who founded and directs the <a href=\"http:\/\/www.fhi.ox.ac.uk\/\">Future of Humanity Institute<\/a>, in the Oxford Martin School at the University of Oxford. He created the \u201cpaper-clip maximizer\u201d thought experiment to expose flaws in how we conceive of superintelligence. We anthropomorphize such machines as particularly clever math nerds, says Bostrom, whose book <em>Superintelligence: Paths, Dangers, Strategies<\/em> was released in Britain in July and arrived stateside this month. Spurred by science fiction and pop culture, we assume that the main superintelligence-gone-wrong scenario features a hostile organization programming software to conquer the world. But those assumptions fundamentally misunderstand the nature of superintelligence: The dangers come not necessarily from evil motives, says Bostrom, but from a powerful, wholly nonhuman agent that lacks common sense.<\/p>\n<p><a href=\"http:\/\/chronicle.com\/article\/Is-Artificial-Intelligence-a\/148763\/\" target=\"_blank\">Read more<\/a><\/p>\n","protected":false},"excerpt":{"rendered":"<p>By Angela Chen \u2014 The Chronicle of Higher Education When the world ends, it may not be by fire or ice or an evil robot overlord. Our demise may come at the hands of a superintelligence that just wants more paper clips. So says Nick Bostrom, a philosopher who founded and directs the Future of [\u2026]<\/p>\n","protected":false},"author":76,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[6],"tags":[],"class_list":["post-12344","post","type-post","status-publish","format-standard","hentry","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/12344","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/76"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=12344"}],"version-history":[{"count":3,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/12344\/revisions"}],"predecessor-version":[{"id":64909,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/12344\/revisions\/64909"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=12344"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=12344"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=12344"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}