Despite worries about threats from artificial intelligence, debates about the proper role of government regulation of AI have generally been lacking.
Ban Killer Robots before They Become Weapons of Mass Destruction
By Peter Asaro | August 7, 2015
SA Forum is an invited essay from experts on topical issues in science and technology.
Last week the Future of Life Institute released a letter signed by some 1,500 artificial intelligence (AI), robotics and technology researchers. Among them were celebrities of science and the technology industry, including Stephen Hawking, Elon Musk and Steve Wozniak, along with public intellectuals such as Noam Chomsky and Daniel Dennett. The letter called for an international ban on offensive autonomous weapons, which could target and fire weapons without meaningful human control.
This week is the 70th anniversary of the atomic bombing of the Japanese cities of Hiroshima and Nagasaki, which together killed over 200,000 people, mostly civilians. It took 10 years before the physicist Albert Einstein and the philosopher Bertrand Russell, along with nine other prominent scientists and intellectuals, issued a letter calling for global action to address the threat to humanity posed by nuclear weapons. They were motivated by the atomic devastation in Japan but also by the escalating arms race of the Cold War, which was rapidly and vastly increasing the number, destructive capability and efficient delivery of nuclear arms, draining vast resources and putting humanity at risk of total destruction. They also noted in their letter that those who knew the most about the effects of such weapons were the most concerned and pessimistic about their continued development and use.
The Future of Life Institute letter is significant for the same reason: It is signed by a large group of those who know the most about AI and robotics, with some 1,500 signatures at its release on July 28 and more than 17,000 today. Signatories include many current and former presidents, fellows and members of the Association for the Advancement of Artificial Intelligence, the Association for Computing Machinery and the IEEE Robotics & Automation Society; editors of leading AI and robotics journals; and key players in leading artificial-intelligence companies such as Google DeepMind and IBM’s Watson team. As Max Tegmark, Massachusetts Institute of Technology physics professor and a founder of the Future of Life Institute, told Motherboard, “This is the AI experts who are building the technology who are speaking up and saying they don’t want anything to do with this.”
In the second half of this century, humans will be regularly engaging in sexual activity with robots, and perhaps even falling in love with them, according to Helen Driscoll, a psychologist who specialises in sex and mate choices at the University of Sunderland in the UK.
Her comments came while discussing the technological advances that are making sex dolls more interactive than ever before, and they present a future eerily similar to that of the Joaquin Phoenix film Her, in which the main character falls in love with the operating system on his computer (voiced by Scarlett Johansson, no less; who could blame him?).
“As virtual reality becomes more realistic and immersive and is able to mimic and even improve on the experience of sex with a human partner; it is conceivable that some will choose this in preference to sex with a less than perfect human being,” Driscoll told David Watkinson from the Daily Mirror.
The Millennium Project released today its annual “2015–16 State of the Future” report, listing global trends on 28 indicators of progress and regress, new insights into 15 Global Challenges, and impacts of artificial intelligence, synthetic biology, nanotechnology and other advanced technologies on employment over the next 35 years.
“Another 2.3 billion people are expected to be added to the planet in just 35 years,” the report notes. “By 2050, new systems for food, water, energy, education, health, economics, and global governance will be needed to prevent massive and complex human and environmental disasters.”
“The idea of rationality is a shared construct between AI and economics. When we frame questions in AI, we say: what are the objectives, what should be optimized and what do we know about the world we’re in? The AI/economics interface has become quite fertile because there is a shared language of utility, probability, and reasoning about others.”
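The quote above frames AI decisions in the economist's language of utility and probability. A minimal sketch of that shared framing, with all actions, probabilities and utilities invented for illustration:

```python
# Expected-utility maximization: the agent knows, for each action, a set of
# possible outcomes with probabilities and utilities, and picks the action
# whose probability-weighted utility is highest.

def expected_utility(action, outcomes):
    """outcomes: list of (probability, utility) pairs for this action."""
    return sum(p * u for p, u in outcomes)

def best_action(actions):
    """actions: dict mapping action name -> list of (probability, utility)."""
    return max(actions, key=lambda a: expected_utility(a, actions[a]))

# A toy decision problem: a cautious choice with a small, likely payoff
# versus an aggressive one with a large but risky payoff.
actions = {
    "cautious":   [(0.9, 10), (0.1, -5)],   # expected utility: 8.5
    "aggressive": [(0.5, 30), (0.5, -20)],  # expected utility: 5.0
}
print(best_action(actions))  # cautious
```

The same skeleton underlies both microeconomic models of rational choice and decision-theoretic planning in AI, which is why the two fields share a vocabulary.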
More than a thousand prominent thinkers and leading AI and robotics researchers have signed an open letter calling for a ban on “offensive autonomous weapons beyond meaningful human control.”
Concerns about the future of artificial intelligence (AI) have recently gained coverage thanks to pioneers like Hawking, Gates, and Musk, though certainly others have been peering down that rabbit hole for some time. While we certainly need to keep our eyes on the far-reaching, it behooves us to take a closer look at the social issues that are right under our noses.
Whether artificial intelligence will transform industry is not a question of when — it's already happening — but rather of how automation is creeping in and impacting some of the biggest sectors of the economy, such as transportation and healthcare, and others that may surprise you.
I recently discussed these near-at-hand social implications and ambiguities with Steve Omohundro, CEO and founder of Possibility Research.
Social Implications of AI
In the words of Mr. Omohundro, we're “on the verge of major transformation” in myriad ways. Consider near-term economics. McKinsey & Company has estimated that AI automation could have an economic impact of $10 trillion to $25 trillion over the next 10 years. Gartner, an information technology research firm, estimates that one third of all jobs will be taken over by AI-driven automation by 2025.
Evidence of these trends is particularly visible in “the cloud” and the Internet of Things, in the delivery of services, in knowledge work and in emerging markets. Tesla recently announced a software upgrade that would allow its self-driving cars to take better control on highways. In the same market, Daimler just unveiled the first autonomous 18-wheeler, the Freightliner “Inspiration” truck, which has been licensed to drive itself on Nevada freeways.
Leaps are already being made in the areas of healthcare and medicine; engineering and architecture (a Chinese design company recently produced 10 3D-printed houses in 24 hours); and, perhaps one that’s not as obvious — the legal profession.
Dealing with Shades of Grey
Electronic discovery, or e-discovery, is the electronic process now used to identify, collect and produce electronically stored information (ESI), such as emails, voicemail and databases, in response to a request for production in a lawsuit or investigation. The legal industry leverages this software when dealing with companies that sometimes have millions of emails, which natural language software helps sift and search.
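The sifting step described above can be sketched in a few lines. The functions and sample emails below are invented for illustration and are far simpler than a real e-discovery tool, which would use full natural language processing rather than bare keyword matching:

```python
# Score each stored email against the terms of a request for production
# and keep only the potentially responsive ones.

def score(text, query_terms):
    """Count how many query terms appear in the text (case-insensitive)."""
    words = set(text.lower().split())
    return sum(1 for term in query_terms if term.lower() in words)

def sift(emails, query_terms, threshold=1):
    """Return the emails that match at least `threshold` query terms."""
    return [e for e in emails if score(e["body"], query_terms) >= threshold]

emails = [
    {"id": 1, "body": "Quarterly pricing discussion with the vendor"},
    {"id": 2, "body": "Lunch plans for Friday"},
    {"id": 3, "body": "Vendor contract and pricing terms attached"},
]
hits = sift(emails, ["pricing", "contract"])
print([e["id"] for e in hits])  # [1, 3]
```

Even this crude filter shows why the approach scales: a reviewer no longer reads millions of messages, only the fraction the software surfaces.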
Future impacts in the legal industry could be felt in areas where much of what human lawyers do is considered quite routine, such as drafting contracts. This type of work usually carries a high price tag, so there is tremendous incentive to automate it.
AI's impacts also overlap and spill over from one industry to the next, and the legal industry sits right at the intersection. Think back to the autonomous cars. Lawyers are now poised to confront new and strange questions such as: What if a self-driving car hits and kills a person? Who is responsible: the people who built the car, or the faulty software? We are on the cusp of having an “onslaught of new technology with very little clue of how to manage it”, says Omohundro.
Big Data and AI Implications
Another area that is changing the lay of the land is big data, which is constantly being applied by consumer companies as they gather data about consumers and then target ads based on this information. Once again, the question arises of how to manage this process and define legal restrictions.
Price fixing presents another ambiguous case. It's illegal to collaborate with other companies in the same business to set prices, and in one recent case an online seller appeared to be colluding to fix prices. It turned out the seller was running bots to check competitors' prices, which were then adjusted according to an algorithm. “What happens when the bot is doing the price fixing; is that illegal?” Apparently so, judging by the outcome of the case, but the question of volition is a valid one.
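The bot in that case can be sketched as a simple repricing rule. All names and numbers below are invented, and real repricing systems are far more elaborate, but the sketch shows how "checking competitors' prices and adjusting by algorithm" works in miniature:

```python
# Scrape or poll competitors' listed prices, then set our own price by a
# fixed rule: undercut the cheapest competitor slightly, but never sell
# below cost. Whether such an automated rule can amount to illegal price
# fixing is exactly the legal question the case raised.

def reprice(my_cost, competitor_prices, undercut=0.01):
    """Match the lowest competitor price minus `undercut`, floored at cost."""
    lowest = min(competitor_prices)
    return max(my_cost, round(lowest - undercut, 2))

print(reprice(my_cost=5.00, competitor_prices=[7.99, 8.49, 7.50]))  # 7.49
print(reprice(my_cost=5.00, competitor_prices=[4.25]))              # 5.0
```

Note that no human sets any individual price here; the ambiguity about volition arises because the rule, not a person, decides.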
In a similar vein, a Swiss group working in the name of art created a bot, gave it bitcoin, hooked it up to the “dark net” (a realm of the Internet where people trade illegally) and had the bot buy things at random. The art exhibit displayed what the bot bought while roving the dark markets. “Police allowed the exhibit, and then came and arrested the bot…carted the computer away,” explains Omohundro. “Every aspect of today’s society is going to be transformed by these technologies.”
While there are no succinct answers to the economic or ethical considerations behind the “big questions” that Steve brought up in our conversation, he is confident that more informed and serious discourse will help us make better decisions about the human future — and I certainly hope he's right.