More news on DARPA’s new deep learning microchip for the military.
A military-funded breakthrough in microchips opens the door to portable deep learning.
The Future of Life Institute illustrates its objection to lethal autonomous robots:
“Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable. Unfortunately, no one can agree on who that is…”
The year is 2020 and intense fighting has once again broken out between Israel and Hamas militants based in Gaza. In response to a series of rocket attacks, Israel rolls out a new version of its Iron Dome air defense system. Designed in a huge collaboration involving defense companies headquartered in the United States, Israel, and India, this third generation of the Iron Dome has the capability to act with unprecedented autonomy and has cutting-edge artificial intelligence technology that allows it to analyze a tactical situation by drawing from information gathered by an array of onboard sensors and a variety of external data sources. Unlike prior generations of the system, the Iron Dome 3.0 is designed not only to intercept and destroy incoming missiles, but also to identify and automatically launch a precise, guided-missile counterattack against the site from where the incoming missile was launched. The day after the new system is deployed, a missile launched by the system strikes a Gaza hospital far removed from any militant activity, killing scores of Palestinian civilians. Outrage swells within the international community, which demands that whoever is responsible for the atrocity be held accountable. Unfortunately, no one can agree on who that is…
Much has been made in recent months and years about the risks associated with the emergence of artificial intelligence (AI) technologies and, with it, the automation of tasks that once were the exclusive province of humans. But legal systems have not yet developed regulations governing the safe development and deployment of AI systems or clear rules governing the assignment of legal responsibility when autonomous AI systems cause harm. Consequently, it is quite possible that many harms caused by autonomous machines will fall into a legal and regulatory vacuum. The prospect of autonomous weapons systems (AWSs) throws these issues into especially sharp relief. AWSs, like all military weapons, are specifically designed to cause harm to human beings—and lethal harm, at that. But applying the laws of armed conflict to attacks initiated by machines is no simple matter.
The core principles of the laws of armed conflict are straightforward enough. Those most important to the AWS debate are: attackers must distinguish between civilians and combatants; they must strike only when it is actually necessary to a legitimate military purpose; and they must refrain from an attack if the likely harm to civilians outweighs the military advantage that would be gained. But what if the attacker is a machine? How can a machine make the seemingly subjective determination regarding whether an attack is militarily necessary? Can an AWS be programmed to quantify whether the anticipated harm to civilians would be “proportionate”? Does the law permit anyone other than a human being to make that kind of determination? Should it?
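To make the difficulty concrete, here is a minimal sketch of what a naive, machine-readable proportionality test might look like; every name and number in it is a hypothetical illustration, not anything drawn from the law or from the article. The point is that any such program must bake in an exchange rate between civilian harm and military advantage, and choosing that number is precisely the subjective judgment the law has always assigned to humans.

```python
# Hypothetical sketch of a naive "proportionality check" for an AWS.
# Every value, name, and threshold here is an illustrative assumption;
# the laws of armed conflict define no such numeric scale.

def proportionality_check(expected_civilian_harm: float,
                          military_advantage: float,
                          exchange_rate: float = 1.0) -> bool:
    """Return True if a strike counts as 'proportionate' under this model.

    The crux is exchange_rate: it converts incommensurable quantities
    (harm to civilians vs. tactical gain) onto a single scale. Whoever
    picks that number has made the legal judgment; putting it in code
    hides the judgment rather than eliminating it.
    """
    return military_advantage > expected_civilian_harm * exchange_rate

# A machine can evaluate this instantly, but it cannot tell you whether
# exchange_rate=1.0 was a lawful choice, or who answers for the result.
print(proportionality_check(expected_civilian_harm=3.0, military_advantage=5.0))
```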
Cyber is still a challenge for soldiers on the battlefield.
WASHINGTON — Cyber vulnerabilities continue to plague the Army’s battlefield communications, according to the Pentagon’s top weapons tester, while the service works to harden its network against cyber attacks.
The Army’s Warfighter Information Network-Tactical (WIN-T), the Mid-Tier Networking Vehicular Radio (MNVR), the Joint Battle Command-Platform (JBC-P) and the Rifleman Radio were all cited as having problematic cybersecurity vulnerabilities in a report released Monday by the Pentagon’s Director, Operational Test & Evaluation (DOT&E), J. Michael Gilmore.
Anyone want to work for the SCO (Strategic Capabilities Office)?
A little-known Pentagon office tasked with tweaking existing U.S. military weapons is a key player in staying ahead of Russian and Chinese capabilities.
Russia’s new mind-controlled exoskeleton.
The era of the ‘robo-soldier’ is nearing as Russia claims to be perfecting machines that will revolutionise warfare.
Space is not a government program; it’s the rest of the Universe. Private space business is now a major factor, bent on finding investors interested in generating profits by making space more accessible to more people. Space business pays taxes to governments; it does not consume tax revenues. Further, space business can offer launch services to government agencies at highly competitive rates, thus saving taxpayer dollars. How can they do this, competing with government-funded boosters with a 50-year track record? Simple: governments have no incentive to cut costs. Traditional aerospace industry giants have a huge vested interest in boosters that were developed to military and NASA standards, among which economy was not even an issue. But innovative, competitive companies such as XCOR Aerospace and Mojave Aerospace, without such baggage (and overhead), can drive costs down dramatically. This is a proven principle: notice that we are no longer buying IBM PCs with 64 KB of RAM for $5,000 a unit.
Even more important in the long view, space is a literally astronomical reservoir of material and energy resources. The profit potential of even a single such resource, such as solar power collectors in space beaming microwave power to Earth, is in the trillions of dollars. What would it be worth to the world to reduce fossil fuel consumption by a factor of 20 or 100 while lowering energy costs? Can we afford to continue pretending that Earth is a closed system, doomed to eke out finite resources into a cold, dark future?
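That “trillions of dollars” claim is easy to sanity-check with back-of-the-envelope arithmetic. The sketch below uses two rough ballpark inputs of my own (global electricity demand and an average price per kilowatt-hour), not figures from the article:

```python
# Back-of-the-envelope sizing of the market space-based solar power
# would compete in. Both inputs are rough assumptions, not article data.

global_demand_twh_per_year = 25_000   # approx. world electricity use, TWh/yr (assumed)
avg_price_usd_per_kwh = 0.10          # rough blended price per kWh (assumed)

kwh_per_twh = 1e9                     # 1 TWh = 1,000,000,000 kWh
annual_market_usd = global_demand_twh_per_year * kwh_per_twh * avg_price_usd_per_kwh

print(f"Annual electricity market: ~${annual_market_usd / 1e12:.1f} trillion")
# ~$2.5 trillion per year, so even a modest market share sustained over
# a decade or two puts the revenue potential in the trillions.
```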
Can we afford space? Wrong question. Can businesses afford space? Yes. We get to reap the benefits of their innovative ideas and free competition without footing the bill.
The DoD is spending $12 billion to $15 billion of its FY17 budget on small bets that include next-generation tech improvements. WOW. Given DARPA’s new Neural Engineering System Design (NESD) program, I’m guessing we may finally have a brain-machine interface (BMI) soldier in the future.
The Defense Department will invest the $12 billion to $15 billion from its Fiscal Year 2017 budget slotted for developing a Third Offset Strategy on several relatively small bets, hoping to produce game-changing technology, the vice chairman of the Joint Chiefs of Staff said.
I am in favor of responsible development of new technologies but, as always, it is a double-edged sword.
Scientists say smarter autonomous robots are dangerous and may start a new era in warfare.
A neural interface being created by the United States military aims to greatly improve the resolution and connection speed between biological and non-biological matter.
The Defense Advanced Research Projects Agency (DARPA), an agency of the U.S. Department of Defense, has announced a new research and development program known as Neural Engineering System Design (NESD). It aims to create a fully implantable neural interface able to provide unprecedented signal resolution and data-transfer bandwidth between the human brain and the digital world.
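To put “unprecedented signal resolution and data-transfer bandwidth” in perspective, here is a rough data-rate estimate. The parameter values are my illustrative assumptions; DARPA’s stated ambition for NESD was an interface on the order of one million neurons, versus the roughly one hundred channels of today’s implanted arrays.

```python
# Rough estimate of the raw data rate a million-neuron interface implies.
# All parameter values are illustrative assumptions, not NESD specs.

num_neurons = 1_000_000    # simultaneous recording channels (assumed target)
sample_rate_hz = 30_000    # typical extracellular sampling rate (assumed)
bits_per_sample = 10       # ADC resolution per sample (assumed)

raw_bits_per_second = num_neurons * sample_rate_hz * bits_per_sample
print(f"Raw data rate: ~{raw_bits_per_second / 1e9:.0f} Gbit/s")
# ~300 Gbit/s of raw neural data, which is why a fully implantable
# device would need aggressive on-chip processing and compression
# before anything crosses the skull to the digital world.
```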