Artificial Intelligence Must Answer to Its Creators
Lifeboat News: The Blog, Mon, 28 Sep 2015, https://lifeboat.com/blog/2015/09/artificial-intelligence-must-answer-to-its-creators

Although it was made in 1968, to many people the renegade HAL 9000 computer in the film 2001: A Space Odyssey still represents the potential danger of real-life artificial intelligence. However, according to mathematician, computer visionary, and author Dr. John MacCormick, the scenario of computers run amok depicted in the film, and in just about every other work of science fiction, will never happen.

“Right from the start of computing, people realized these things were not just going to be crunching numbers, but could solve other types of problems,” MacCormick said during a recent interview with TechEmergence. “They quickly discovered computers couldn’t do things as easily as they thought.”

While MacCormick is quick to acknowledge modern advances in artificial intelligence, he's also very conscious of its ongoing limitations, specifically in replicating human vision. “The sub-field where we try to emulate the human visual system turned out to be one of the toughest nuts to crack in the whole field of AI,” he said. “Object recognition systems today are phenomenally good compared to what they were 20 years ago, but they’re still far, far inferior to the capabilities of a human.”

To compensate for those limitations, MacCormick notes, other technologies have been developed that, while many consider them artificially intelligent, don't rely on AI. As an example, he pointed to Google’s self-driving car. “If you look at the Google self-driving car, the AI vision systems are there, but they don’t rely on them,” MacCormick said. “In terms of recognizing lane markings on the road or obstructions, they’re going to rely on other sensors that are more reliable, such as GPS, to get an exact location.”

Although it may not specifically rely on AI, MacCormick still believes that, with new and improved algorithms emerging all the time, the self-driving car will eventually become a very real part of the fabric of daily life. And the incremental gains being made in real AI systems won't be limited to self-driving cars. “One of the areas where we’re seeing pretty consistent improvement is translation of human languages,” he said. “I believe we’re going to continue to see high quality translations between human languages emerging. I’m not going to give a number in years, but I think it’s doable in the middle term.”

Ultimately, the uses and applications of artificial intelligence will still remain in the hands of their creators, according to MacCormick. “I’m an unapologetic optimist. I don’t think AIs are going to get out of control of humans and start doing things on their own,” he said. “As we get closer to systems that rival humans, they will still be systems that we have designed and are capable of controlling.”

That optimistic outlook would seemingly put MacCormick at odds with the warnings about the potential dangers of AI voiced recently by the likes of Elon Musk, Stephen Hawking, and Bill Gates. However, MacCormick says he agrees with their point that the ethical ramifications of artificial intelligence should be considered and guidance protocols developed.

“Everyone needs to be thinking about it and cooperating to be sure that we’re moving in the right direction,” MacCormick said. “At some point, all sorts of people need to be thinking about this, from philosophers and social scientists to technologists and computer scientists.”

MacCormick didn’t mince words when he cited the sub-field of AI research where those protocols are most obviously needed: military robotics. “As we become capable of building systems that are somewhat autonomous and can be used for lethal force in military conflicts, then the entire ethics of what should and should not be done really changes,” he said. “We need to be thinking about this and try to formulate the correct way of using autonomous systems.”

In the end, MacCormick’s optimistic view of the future, and the positive potentials of artificial intelligence, beams through clouds of uncertainty. “I like to take the optimistic view that we’ll be able to continue building these things and making them into useful tools that aren’t the same as humans, but have extraordinary capabilities,” MacCormick said. “And we can guide them and control them and use them for positive benefit.”
