

Feb 5, 2016

Who’s to Blame (Part 1): The Legal Vacuum Surrounding Autonomous Weapons

Posted in categories: biotech/medical, law, military, robotics/AI

The Future of Life Institute illustrates its objection to lethal autonomous weapons:

“Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable. Unfortunately, no one can agree on who that is…”


The year is 2020, and intense fighting has once again broken out between Israel and Hamas militants based in Gaza. In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system. Designed in a large collaboration among defense companies headquartered in the United States, Israel, and India, this third generation of the Iron Dome can act with unprecedented autonomy, using cutting-edge artificial intelligence to analyze a tactical situation by drawing on information gathered by an array of onboard sensors and a variety of external data sources. Unlike prior generations of the system, the Iron Dome 3.0 is designed not only to intercept and destroy incoming missiles, but also to identify and automatically launch a precise, guided-missile counterattack against the site from which the incoming missile was launched. The day after the new system is deployed, a missile launched by the system strikes a Gaza hospital far removed from any militant activity, killing scores of Palestinian civilians. Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable. Unfortunately, no one can agree on who that is…

Much has been made in recent months and years of the risks associated with the emergence of artificial intelligence (AI) technologies and, with it, the automation of tasks that once were the exclusive province of humans. But legal systems have not yet developed regulations governing the safe development and deployment of AI systems or clear rules governing the assignment of legal responsibility when autonomous AI systems cause harm. Consequently, it is quite possible that many harms caused by autonomous machines will fall into a legal and regulatory vacuum. The prospect of autonomous weapons systems (AWSs) throws these issues into especially sharp relief. AWSs, like all military weapons, are specifically designed to cause harm to human beings—and lethal harm, at that. But applying the laws of armed conflict to attacks initiated by machines is no simple matter.

The core principles of the laws of armed conflict are straightforward enough. Those most important to the AWS debate are: attackers must distinguish between civilians and combatants; they must strike only when it is actually necessary to a legitimate military purpose; and they must refrain from an attack if the likely harm to civilians outweighs the military advantage that would be gained. But what if the attacker is a machine? How can a machine make the seemingly subjective determination of whether an attack is militarily necessary? Can an AWS be programmed to quantify whether the anticipated harm to civilians would be “proportionate”? Does the law permit anyone other than a human being to make that kind of determination? Should it?
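To make the proportionality question concrete, here is a deliberately naive sketch in Python. Every name and numeric scale in it is hypothetical, invented for illustration rather than drawn from any real weapons system or legal test: once anticipated civilian harm and military advantage have somehow been reduced to comparable numbers, the rule itself collapses into a single comparison.

```python
# Hypothetical illustration only -- not a real targeting system or legal standard.
from dataclasses import dataclass


@dataclass
class StrikeAssessment:
    """Machine-generated estimates for a proposed strike (hypothetical fields)."""
    expected_civilian_harm: float       # anticipated harm to civilians, on an invented scale
    expected_military_advantage: float  # anticipated military advantage, on the same scale


def is_proportionate(assessment: StrikeAssessment) -> bool:
    """A crude reading of the proportionality rule: refrain if the likely harm
    to civilians outweighs the military advantage that would be gained."""
    return assessment.expected_civilian_harm <= assessment.expected_military_advantage


# The comparison is trivial; the contested questions are whether a machine can
# estimate these quantities at all, and whether the law permits it to weigh them.
print(is_proportionate(StrikeAssessment(expected_civilian_harm=0.8,
                                        expected_military_advantage=0.3)))  # False
```

The point of the sketch is that the difficulty raised above lies entirely in producing and legally weighing the inputs, and in who is permitted to make that judgment, not in the code that compares them.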

