US state seeks to outlaw the use of armed robots

The US military and its contractors would be exempt.

Armed robots are autonomous or semi-autonomous machines that carry weapons or other offensive capabilities. These robots can be employed in a variety of settings, including the military, law enforcement, industry, and security.

Today, many armed robots are controlled remotely by human operators who can keep a safe distance between themselves and the devices. This is particularly prevalent with military drones, whose operators control the aircraft and its weaponry from a distance, keeping themselves safe while arguably making the machines even more dangerous to civilians.

There are ongoing efforts to establish rules and laws governing the deployment of armed robots in order to reduce the risks involved. Now, one US state is trying to outlaw them altogether.

This is according to a press statement by the American Civil Liberties Union (ACLU) published on Wednesday.

A group of Massachusetts legislators, human rights organizations, and executives from the robotics sector are joining forces to support legislation that would make arming robots, as well as drones and other uncrewed devices, illegal.

GA-ASI Poised to Begin LongShot Flight Testing Phase

“We are extremely excited to get in the air!” said Mike Atwood, Vice President of Advanced Aircraft Programs at GA-ASI. “Flight testing will validate digital designs that have been refined throughout the course of the project. General Atomics is dedicated to leveraging this process to rapidly deliver innovative unmanned capabilities for national defense.”

About GA-ASI

General Atomics Aeronautical Systems, Inc. (GA-ASI), an affiliate of General Atomics, is a leading designer and manufacturer of proven, reliable RPA systems, radars, and electro-optic and related mission systems, including the Predator® RPA series and the Lynx® Multi-mode Radar. With more than eight million flight hours, GA-ASI provides long-endurance, mission-capable aircraft with integrated sensor and data link systems required to deliver persistent situational awareness. The company also produces a variety of sensor control/image analysis software, offers pilot training and support services, and develops meta-material antennas.

Space Force to create “integrated” units responsible for acquisition, maintenance and operations

NATIONAL HARBOR, Md. — U.S. Chief of Space Operations Gen. Chance Saltzman on Sept. 12 announced that the Space Force will experiment with a new command structure in which a single unit is responsible for all aspects of a mission area, including training, procurement and operations.

Two integrated units will be established, each run by a Space Force colonel: one for space electronic warfare and the other for positioning, navigation, and timing (PNT) satellites.

This is a departure from the current structure where responsibilities for procurement, maintenance, sustainment and operations are fragmented under separate chains of command, Saltzman said in a keynote speech at the Air & Space Forces Association’s annual conference.

4 Reasons Why Becoming a Type 2 Civilization Is a Bad Idea

The year is 102,023. A giant meteorite the size of Pluto is approaching the Solar System, flying straight toward Earth. But as the meteorite crosses Saturn’s orbit, a swarm of miner probes intercepts it. Their scans reveal no minerals on the object, so the probes return with nothing.

Meanwhile, at the Space Security Center in Alaska, military personnel are setting up a laser. The Solar System witnesses a sudden flare, and nothing remains of the dwarf-planet-sized meteorite. Now, unless hydrogen miners on Jupiter post videos of another annihilation on social media… This is what the world could look like when humanity finally becomes a Type Two civilization on the Kardashev scale. We’ll have almost limitless energy reserves and the ability to prepare for interstellar flights or to instantly destroy any threat. But will humanity really be safe? And what could ruin a Type Two civilization?

Move over AI, quantum computing will be the most powerful and worrying technology

In 2022, leaders in the U.S. military technology and cybersecurity community said that they considered 2023 to be the “reset year” for quantum computing. They estimated that making systems quantum-safe will take about as long as it will take for the first security-threatening quantum computers to become available: roughly four to six years in both cases. It is vital that industry leaders quickly begin to understand the security issues around quantum computing and take action before this powerful technology surfaces.
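
One way to make that timing argument concrete is the comparison security researchers often call Mosca’s inequality: if the number of years your data must remain secret, plus the years a migration to quantum-safe cryptography takes, exceeds the years until a cryptographically relevant quantum computer exists, you are already exposed. Below is a minimal sketch of that arithmetic in Python; the specific values are illustrative assumptions, loosely based on the four-to-six-year estimates quoted above.

```python
# Mosca's inequality: act now if x + y > z, where
#   x = years intercepted data must stay secret,
#   y = years a migration to quantum-safe cryptography takes,
#   z = years until a cryptographically relevant quantum computer exists.
# All three values below are illustrative assumptions, not measurements.
shelf_life_years = 10  # x: "harvest now, decrypt later" horizon
migration_years = 5    # y: midpoint of the 4-6 year migration estimate
threat_years = 5       # z: midpoint of the 4-6 year threat estimate

if shelf_life_years + migration_years > threat_years:
    print("At risk: data harvested today could outlive its encryption.")
else:
    print("Within the margin, under these assumptions.")
```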

Quantum computing is a cutting-edge technology that presents a unique set of challenges and promises unprecedented computational power. Unlike traditional computing, which operates using binary logic (0s and 1s) and sequential calculations, quantum computing works with quantum bits, or qubits, which can exist in superpositions of 0 and 1; a register of n qubits is described by 2^n complex amplitudes at once. This allows quantum computers to explore an enormous number of computational paths simultaneously, exploiting the probabilistic nature of quantum mechanics.
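
To see why qubits scale this way, here is a minimal sketch, assuming nothing beyond NumPy, that simulates a small qubit register classically: the register is just a vector of 2**n complex amplitudes, and applying a Hadamard gate to every qubit spreads it into an equal superposition of all basis states.

```python
import numpy as np

ZERO = np.array([1, 0], dtype=complex)                       # the |0> basis state
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)  # Hadamard gate

n = 3  # qubits in the register

# An n-qubit register is a vector of 2**n amplitudes; start in |000>.
state = ZERO
for _ in range(n - 1):
    state = np.kron(state, ZERO)

# Apply H to every qubit (the tensor product H (x) H (x) H).
gate = H
for _ in range(n - 1):
    gate = np.kron(gate, H)
state = gate @ state

# Measurement yields each basis state with probability |amplitude|**2;
# after the Hadamards, all 2**n outcomes are equally likely.
for idx, p in enumerate(np.abs(state) ** 2):
    print(f"|{idx:0{n}b}>: {p:.3f}")
```

Each added qubit doubles the number of amplitudes the classical simulation must track, which is exactly the growth that quantum hardware gets for free and classical hardware does not.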

U.S. Cyborg Soldiers to Confront China’s Enhanced ‘Super Soldiers’ — Is This the Future of the Military?

What do you think? China is doing it. The West is going to have to keep up. Have you seen the Netflix series Altered Carbon? It’s like that.


A U.S. Army video shows its concept of the soldier of the future. At first glance, it looks like nothing more than a better-equipped soldier.

But the video mentions “neural enhancement.” That can mean a brain implant that connects a human to computers. The defense agency DARPA has been working on an advanced implant that would essentially put the human brain “online.” There could also be eye and ear implants and other circuitry under the skin to make the optimal fighting machine.

Americans will have to decide whether this is ethical because some in our military clearly want it.

FULL REPORT: https://www1.cbn.com/cbnnews/national-security/2021/april/ne…-soldiers.

Did Elon Musk prevent a Russia-Ukraine nuclear war?

New details of Musk’s involvement in the Ukraine-Russia war revealed in his biography.

Elon Musk holds many titles. He is the CEO of Tesla and SpaceX and owns the social media company X, which was recently rebranded from Twitter. New details of his role in the Ukraine-Russia war come from an excerpt of his biography published in the Washington Post.

According to the excerpt from Walter Isaacson’s book, Musk disabled his company Starlink’s satellite communication network, which the Ukrainian military was using to mount a sneak attack on the Russian naval fleet in Sevastopol, Crimea. The Ukrainian army was using Starlink as a guide to target Russian ships and attack them with six small…

Musk’s biographer alleges he prevented a nuclear war between Ukraine and Russia by turning off the Starlink satellite network near Crimea, but Musk says, ‘SpaceX did not deactivate anything’.

Experts alone can’t handle AI — social scientists explain why the public needs a seat at the table

Are democratic societies ready for a future in which AI algorithmically assigns limited supplies of respirators or hospital beds during pandemics? Or one in which AI fuels an arms race between disinformation creation and detection? Or one in which AI sways court decisions with amicus briefs written to mimic the rhetorical and argumentative styles of Supreme Court justices?

Decades of research show that most democratic societies struggle to hold nuanced debates about new technologies. These discussions need to be informed not only by the best available science but also by the numerous ethical, regulatory and social considerations of their use. Difficult dilemmas posed by artificial intelligence are already…

Even AI experts are uneasy about how unprepared societies are for moving forward with the technology in a responsible fashion. We study the public and political aspects of emerging science. In 2022, our research group at the University of Wisconsin-Madison interviewed almost 2,200 researchers who had published on the topic of AI. Nine in 10 (90.3%) predicted that there will be unintended consequences of AI applications, and three in four (75.9%) did not think that society is prepared for the potential effects of AI applications.

Who gets a say on AI?

Industry leaders, policymakers and academics have been slow to adjust to the rapid onset of powerful AI technologies. In 2017, researchers and scholars met in Pacific Grove, California, for a small, expert-only meeting to outline principles for future AI research. Senator Chuck Schumer plans to hold the first of a series of AI Insight Forums on Sept. 13, 2023, to help Beltway policymakers think through AI risks with tech leaders like Meta’s Mark Zuckerberg and X’s Elon Musk.
