
I hear this author; however, can it pass military basic training/boot camp? I think not.


Back when Alphabet was known as Google, the company bought Boston Dynamics, makers of the amazingly advanced robot named Atlas. At the time, Google promised that Boston Dynamics would stop taking military contracts, as it often did. But here’s the open secret about Atlas: She can enlist in the US military anytime she wants.

Technology transfer is a two-way street. Traditionally we think of technology being transferred from the public to the private sector, with the internet as just one example. The US government invests in and develops all kinds of important technologies for war and espionage, and many of those technologies eventually make their way to American consumers in one way or another. When the government does so consciously with both military and civilian capabilities in mind, it’s called dual-use tech.

But just because a company might not actively pursue military contracts doesn’t mean that the US military can’t buy (and perhaps improve upon) technology being developed by private companies. The defense community sees this as more crucial than ever, as government spending on research and development has plummeted. After World War II, about one-third of US R&D was done by private industry and two-thirds by the US government. Today it’s the inverse.

Read more

A new laser tag is coming our way; however, this time when you’re tagged, you really are dead.


US officials tout the ‘unprecedented power’ of killing lasers to be released by 2023.

The US Army will deploy its first laser weapons by 2023, according to a recently released report.

Mary J. Miller, Deputy Assistant Secretary of the Army for Research and Technology, speaking before the House Armed Services Committee on Monday, stated, “I believe we’re very close,” when asked about progress with directed-energy, or laser, weapons.

Read more

I see articles and reports like the following about the military actually considering fully autonomous missiles, drones with missiles, etc. I have to ask myself what happened to logical thinking.


A former Pentagon official is warning that autonomous weapons would likely be uncontrollable in real-world situations thanks to design failures, hacking, and external manipulation. The answer, he says, is to always keep humans “in the loop.”

The new report, titled “Autonomous Weapons and Operational Risk,” was written by Paul Scharre, a director at the Center for a New American Security. Scharre used to work at the office of the Secretary of Defense, where he helped the US military craft its policy on the use of unmanned and autonomous weapons. Once deployed, these future weapons would be capable of selecting and engaging targets of their own choosing, raising a host of legal, ethical, and moral questions. But as Scharre points out in the new report, “They also raise critically important considerations regarding safety and risk.”

As Scharre is careful to point out, there’s a difference between semi-autonomous and fully autonomous weapons. With semi-autonomous weapons, a human controller would stay “in the loop,” monitoring the activity of the weapon or weapons system. Should it begin to fail, the controller would just hit the kill switch. But with autonomous weapons, the damage that could be inflicted before a human is capable of intervening is significantly greater. Scharre worries that these systems are prone to design failures, hacking, spoofing, and manipulation by the enemy.
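At bottom, the “in the loop” distinction is a control-flow one: does every engagement pass through a human decision, or not? A minimal illustrative sketch in Python (my own toy example, not taken from Scharre’s report or any real system) makes the difference concrete:

```python
# Illustrative only: a human-in-the-loop gate. Every engagement must be
# explicitly approved by a human operator, who can also decline (the
# "kill switch"). A fully autonomous system would skip this gate.

def engage_target(target, operator_approves):
    """Act only if the human controller explicitly approves this target."""
    if operator_approves(target):
        return f"engaged {target}"
    return f"held fire on {target}"

# The operator callback is the human in the loop; here it always declines.
print(engage_target("test-target", lambda t: False))
```

The point of the sketch is that removing the approval callback, not any change to the weapon itself, is what turns a semi-autonomous system into an autonomous one, and with it removes the chance to intervene before damage is done.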

Read more

The US military recently decided that Google’s Alpha Dog and Spot robots weren’t ready for active duty, leaving the four-legged robots with nothing to do. In the meantime, Google is doing with its battery-powered Spot robot what we probably would: using it as a dog toy. The company recently unleashed it on Alex, the terrier that reportedly belongs to Android co-founder and Playground Global boss Andy Rubin. The adorable result is that Alex, clearly the boss of this arrangement, sees the hapless robot as an existential threat that must be barked at and harangued (no butt-sniffing, luckily).

The model is reportedly the only one that’s not in military hands, and there’s no word on what Google’s Boston Dynamics plans to do with it now. The military thought Spot could be a potential ground reconnaissance asset, but “the problem is, Spot in its current configuration doesn’t have the autonomy to do that,” says James Peneiro, the Ground Combat head of the Warfighting Lab. It would be shortsighted, of course, to think the robots need to be put to work right away. A lot of the self-balancing tech in Spot (and its ability to take a kick) can already be found in the next-generation humanoid Atlas Robot.

Read more

I agree 100% with this report by a former Pentagon official on AI systems involving missiles.


A new report written by a former Pentagon official who helped establish United States policy on autonomous weapons argues that such weapons could be uncontrollable in real-world environments where they are subject to design failure as well as hacking, spoofing and manipulation by adversaries.

In recent years, low-cost sensors and new artificial intelligence technologies have made it increasingly practical to design weapons systems that make killing decisions without human intervention. The specter of so-called killer robots has touched off an international protest movement and a debate within the United Nations about limiting the development and deployment of such systems.

The new report was written by Paul Scharre, who directs a program on the future of warfare at the Center for a New American Security, a policy research group in Washington, D.C. From 2008 to 2013, Mr. Scharre worked in the office of the Secretary of Defense, where he helped establish United States policy on unmanned and autonomous weapons. He was one of the authors of a 2012 Defense Department directive that set military policy on the use of such systems.

Read more

BMIs (according to DARPA and David Axe) could begin human testing as early as 2017. The plan is to use stentrodes, and testing has already proven successful on sheep. I personally have concerns about both health (as the article highlighted, the devices are prone to causing blood clots) and anything connecting via Wi-Fi or the net, with hackers trying to challenge themselves to prove anything is hackable. Before this goes live on a person, we should make sure we have a more secure, hack-resistant net, before someone is injured or could injure someone else.


Soldiers could control drones with a thought.

Read more

The tiny injectable machine could turn your noodle into a remote control.

The Pentagon is attempting what was, until recently, an impossible technological feat—developing a high-bandwidth neural interface that would allow people to beam data from their minds to external devices and back.

That’s right—a brain modem. One that could allow a soldier to, for example, control a drone with his mind.

Read more