{"id":17902,"date":"2015-09-28T11:15:08","date_gmt":"2015-09-28T18:15:08","guid":{"rendered":"http:\/\/lifeboat.com\/blog\/?p=17902"},"modified":"2015-09-28T11:15:08","modified_gmt":"2015-09-28T18:15:08","slug":"artificial-intelligence-must-answer-to-its-creators","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2015\/09\/artificial-intelligence-must-answer-to-its-creators","title":{"rendered":"Artificial Intelligence Must Answer to Its Creators"},"content":{"rendered":"<p>Although it was made in 1968, to many people, the renegade HAL 9000 computer in the film 2001: A Space Odyssey still represents the potential danger of real-life artificial intelligence. However, according to mathematician, computer visionary, and author <a href=\"http:\/\/users.dickinson.edu\/~jmac\/\">Dr. John MacCormick<\/a>, the scenario of computers run amok depicted in the film \u2013 and in just about every other work of science fiction \u2013 will never happen.<\/p>\n<p>\u201cRight from the start of computing, people realized these things were not just going to be crunching numbers, but could solve other types of problems,\u201d MacCormick said during a <a href=\"http:\/\/techemergence.com\/episode-86-basic-building-blocks-of-artificial-intelligence-with-dr-john-maccormack\/\">recent interview with TechEmergence<\/a>. \u201cThey quickly discovered computers couldn\u2019t do things as easily as they thought.\u201d<\/p>\n<p>While MacCormick is quick to acknowledge modern advances in artificial intelligence, he\u2019s also very conscious of its ongoing limitations, specifically in replicating human vision. \u201cThe sub-field where we try to emulate the human visual system turned out to be one of the toughest nuts to crack in the whole field of AI,\u201d he said. 
\u201cObject recognition systems today are phenomenally good compared to what they were 20 years ago, but they\u2019re still far, far inferior to the capabilities of a human.\u201d<\/p>\n<p>To compensate for these limitations, MacCormick notes, other technologies have been developed that, while considered by many to be artificially intelligent, don\u2019t rely on AI. As an example, he pointed to <a href=\"http:\/\/www.technologyreview.com\/news\/530276\/hidden-obstacles-for-googles-self-driving-cars\/\">Google\u2019s self-driving car<\/a>. \u201cIf you look at the Google self-driving car, the AI vision systems are there, but they don\u2019t rely on them,\u201d MacCormick said. \u201cIn terms of recognizing lane markings on the road or obstructions, they\u2019re going to rely on other sensors that are more reliable, such as GPS, to get an exact location.\u201d<\/p>\n<p>Although the technology may not rely specifically on AI, MacCormick believes that, with new and improved algorithms emerging all the time, self-driving cars will eventually become a very real part of daily life. And the incremental gains being made in real AI systems won\u2019t be limited to self-driving cars. \u201cOne of the areas where we\u2019re seeing pretty consistent improvement is <a href=\"http:\/\/news.stanford.edu\/news\/2014\/october\/translate-human-machine-10-29-14.html\">translation of human languages<\/a>,\u201d he said. \u201cI believe we\u2019re going to continue to see high quality translations between human languages emerging. I\u2019m not going to give a number in years, but I think it\u2019s doable in the middle term.\u201d<\/p>\n<p>Ultimately, the uses and applications of artificial intelligence will remain in the hands of their creators, according to MacCormick. \u201cI\u2019m an unapologetic optimist. I don\u2019t think AIs are going to get out of control of humans and start doing things on their own,\u201d he said. 
\u201cAs we get closer to systems that rival humans, they will still be systems that we have designed and are capable of controlling.\u201d<\/p>\n<p>That optimistic outlook would seemingly put MacCormick at odds with the warnings about the potential dangers of AI voiced recently by the likes of Elon Musk, Stephen Hawking and Bill Gates. However, MacCormick says he agrees with their point that the ethical ramifications of artificial intelligence should be considered and guidance protocols developed.<\/p>\n<p>\u201cEveryone needs to be thinking about it and cooperating to be sure that we\u2019re moving in the right direction,\u201d MacCormick said. \u201cAt some point, all sorts of people need to be thinking about this, from philosophers and social scientists to technologists and computer scientists.\u201d<\/p>\n<p>MacCormick didn\u2019t mince words when citing the area of AI research where those protocols are most needed: military robotics. \u201cAs we become capable of building systems that are somewhat autonomous and can be used for lethal force in military conflicts, then the entire ethics of what should and should not be done really changes,\u201d he said. \u201cWe need to be thinking about this and try to formulate the correct way of using autonomous systems.\u201d<\/p>\n<p>In the end, MacCormick\u2019s optimistic view of the future, and of the positive potential of artificial intelligence, beams through the clouds of uncertainty. \u201cI like to take the optimistic view that we\u2019ll be able to continue building these things and making them into useful tools that aren\u2019t the same as humans, but have extraordinary capabilities,\u201d MacCormick said. 
\u201cAnd we can guide them and control them and use them for positive benefit.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Although it was made in 1968, to many people, the renegade HAL 9000 computer in the film 2001: A Space Odyssey still represents the potential danger of real-life artificial intelligence. However, according to mathematician, computer visionary, and author Dr. John MacCormick, the scenario of computers run amok depicted in the film \u2013 and in just [\u2026]<\/p>\n","protected":false},"author":274,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[1502,1523,1521,12],"tags":[2197,2199,2196,2198,2195,2194],"class_list":["post-17902","post","type-post","status-publish","format-standard","hentry","category-big-data","category-computing","category-driverless-cars","category-existential-risks","tag-ai-challenges","tag-autonomous-systems","tag-big-data-algorithms","tag-controlled-ai","tag-evolution-of-ai","tag-john-maccormick"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/17902","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/274"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=17902"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/17902\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=17902"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=17902"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=17902"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}