
A top U.S. general has issued a sobering warning: after years of upgrading their space-warfare arsenals, China and Russia could leave the United States at a disadvantage if tensions between the countries were to degenerate into war. Air Force Gen. John E. Hyten believes China and Russia have been trying to outpace the U.S. militarily in space, and the Pentagon is now moving to counter the foreseen “challenge” of being outmaneuvered and outgunned there. If a World War 3 scenario were to unfold, he argues, the U.S. should be prepared to meet that challenge.

The Washington Times reported last week that Gen. Hyten, who has been chosen as the next commander of Strategic Command, told the Senate Armed Services Committee that the U.S. is moving to counter the threat of a space-war disadvantage against China and Russia. He said both countries are currently developing anti-satellite missiles, laser guns, and maneuvering killer space robots that, once deployed, could knock out or incapacitate strategic U.S. communications, navigation and intelligence satellites. As military experts know, these satellites are crucial to the operation of America’s high-technology warfare systems.

“The Department of Defense has aggressively moved out to develop responses to the threats that we see coming from China and Russia. I believe it’s essential that we go faster in our responses.”

Read more

A new survey of existing and planned smart weapons finds that AI is increasingly used to replace humans, not help them.

The Pentagon’s oft-repeated line on artificial intelligence is this: we need much more of it, and quickly, in order to help humans and machines work better alongside one another. But a survey of existing weapons finds that the U.S. military more commonly uses AI not to help but to replace human operators, and, increasingly, human decision making.

The report from the Elon Musk-funded Future of Life Institute does not forecast Terminators capable of high-level reasoning. At their smartest, our most advanced artificially intelligent weapons are still operating at the level of insects … armed with very real and dangerous stingers.

Read more

Robotic Customs Guards: could these be the answer to every country’s border security and policing problems, and even a step toward fairness?


NK Technology, Beijing, Oct 2: Ten robots have started working as customs officers at three ports in China’s Guangdong province, authorities said on Sunday.

They were the first batch of intelligent robots to be used by Chinese customs at the ports of Gongbei, Hengqin and Zhongshan, Xinhua news agency reported.

The robots, named Xiao Hai, have state-of-the-art perception technology and are able to listen, speak, learn, see and walk.

Read more

A group of Polish cave divers has stumbled upon what would be the world’s deepest underwater cave. The cave, called Hranicka Propast and located in the Czech Republic, was recently examined with an underwater robot.

While scientists have always known this mysterious cave to be deep, it wasn’t until a team of spelunkers took a closer look that they realized just how astonishingly deep it is. They measured it at 1,325 feet, which would make it the deepest underwater cave yet discovered on Earth. The previous record holder is Pozzo del Merro, a cave in Italy that is 1,286 feet deep.

This isn’t your typical diving scenario, so the team needed a remotely operated vehicle (ROV) to access the cave. Still, scientists have dived there before, many times over the years, in fact. The cave has often been explored because it was formed by hot mineral water bubbling up from below, rather than by rainwater seeping down as in most caves. It’s a very unusual geological feature.

Read more

A competition pitting artificial intelligence (AI) against human players in the classic video game Doom has demonstrated just how advanced AI learning techniques have become – but it’s also caused considerable controversy.

While several teams submitted AI agents for the deathmatch, two students in the US have caught most of the flak, after they published a paper online detailing how their AI bot learned to kill human players in deathmatch scenarios.

The computer science students, Devendra Chaplot and Guillaume Lample, from Carnegie Mellon University, used deep learning techniques to train their AI bot – nicknamed Arnold – to navigate the 3D environment of the first-person shooter Doom.
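The students’ actual training pipeline isn’t reproduced in this post, but a rough idea of how such an agent interacts with the game can be sketched with the open-source ViZDoom platform, which hosted the Visual Doom AI competition. The config file name, the three-button action set and the placeholder random policy below are illustrative assumptions, not Arnold’s real implementation; a trained bot would replace the placeholder with a convolutional neural network mapping raw screen pixels to actions, learned through deep reinforcement learning.

```python
# Minimal, hypothetical sketch of an agent loop on the ViZDoom platform.
# Everything scenario-specific here (config file, action list) is an assumption.
import random
import numpy as np
from vizdoom import DoomGame

game = DoomGame()
game.load_config("basic.cfg")  # assumed: a simple scenario config shipped with ViZDoom
game.init()

# Each action is a list of button states, e.g. [MOVE_LEFT, MOVE_RIGHT, ATTACK].
actions = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]

def choose_action(screen):
    # Placeholder policy: a real agent like Arnold would feed the screen
    # through a convolutional network and pick the highest-value action.
    return random.choice(actions)

for episode in range(5):
    game.new_episode()
    while not game.is_episode_finished():
        state = game.get_state()
        screen = np.array(state.screen_buffer)  # the raw pixels the agent "sees"
        reward = game.make_action(choose_action(screen))
    print("Episode", episode, "reward:", game.get_total_reward())

game.close()
```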

Read more