
Drones, hackers and mercenaries — The future of war | DW Documentary

A shadow war is a war that, officially, does not exist. As mercenaries, hackers and drones take over the role armies once played, shadow wars are on the rise.

States are evading their responsibilities and driving the privatization of violence. War in the grey zone is a booming business: mercenaries and digital weaponry regularly carry out attacks, while those giving the orders remain in the shadows.

Despite its superior army, the U.S. exhausted its military resources in two seemingly endless wars. Now, the superpower is finally bringing its soldiers home. But while the U.S.’s high-tech army may have failed in Afghanistan, it continues to operate outside of official war zones. U.S. Special Forces conduct targeted killings, using drones, hacks and surveillance technologies. All of this is blurring the lines between war and peace.

The documentary also shows viewers how Russian mercenaries and hackers destabilized Ukraine. Indeed, the last decade has seen the rise of cyberspace armament. Hacking, sometimes subsidized by states, has grown into a thriving business. Digital mercenaries sell spy software to authoritarian regimes. Criminal hackers attack any target that can turn a profit for their clients.

But the classic mercenary business is also taking off, because states no longer want their official armies to go into battle. Former mercenary Sean McFate outlines how privatizing warfare creates an even greater demand for it. He warns that a world of mercenaries is a world dominated by war.


AI and Human Enhancement: Americans’ Openness Is Tempered by a Range of Concerns

Developments in artificial intelligence and human enhancement technologies have the potential to remake American society in the coming decades. A new Pew Research Center survey finds that Americans see promise in the ways these technologies could improve daily life and human abilities. Yet public views are also defined by the context of how these technologies would be used, what constraints would be in place and who would stand to benefit – or lose – if these advances become widespread.

Fundamentally, caution runs through public views of artificial intelligence (AI) and human enhancement applications, often centered around concerns about autonomy, unintended consequences and the amount of change these developments might mean for humans and society. People think economic disparities might worsen as some advances emerge and that technologies, like facial recognition software, could lead to more surveillance of Black or Hispanic Americans.

This survey looks at a broad arc of scientific and technological developments – some in use now, some still emerging. It concentrates on public views about six developments that are widely discussed among futurists, ethicists and policy advocates. Three are part of the burgeoning array of AI applications: the use of facial recognition technology by police, the use of algorithms by social media companies to find false information on their sites and the development of driverless passenger vehicles.

Technology in agriculture is reaching new heights

BOW ISLAND, AB – Patrick Fabian is quickly picking up a new skill.

The seed farmer plans to start using drones to monitor his 1,250 irrigated acres.

“On our farm, it will be mostly for crop surveillance to check the middle of the fields and the things we can’t normally see properly without walking every single square foot of the farm,” Fabian said.

Retina-inspired sensors for more adaptive visual perception

To monitor and navigate real-world environments, machines and robots should be able to gather images and measurements under different background lighting conditions. In recent years, engineers worldwide have thus been trying to develop increasingly advanced sensors, which could be integrated within robots, surveillance systems, or other technologies that can benefit from sensing their surroundings.

Researchers at Hong Kong Polytechnic University, Peking University, Yonsei University and Fudan University have recently created a new sensor that can collect data in various illumination conditions, employing a mechanism that artificially replicates the functioning of the retina in the human eye. This bio-inspired sensor, presented in a paper published in Nature Electronics, was fabricated using phototransistors made of molybdenum disulfide.

“Our research team started this research five years ago,” Yang Chai, one of the researchers who developed the sensor, told TechXplore. “This emerging device can output light-dependent and history-dependent signals, which enables image integration, weak signal accumulation, spectrum analysis and other complicated image processing functions, integrating the functions of sensing, data storage and data processing in a single device.”
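To make the idea of a history-dependent photoresponse concrete, here is a minimal Python sketch of a single pixel whose output blends the current light level with a decaying trace of past exposure. The class, its parameters and its dynamics are illustrative assumptions for intuition only, not the device physics reported in the Nature Electronics paper.

# Toy model of a history-dependent photosensor (illustrative assumption,
# not the actual molybdenum disulfide device described above).
class AdaptivePixel:
    def __init__(self, retention=0.9, gain=1.0):
        self.retention = retention  # fraction of the stored signal kept each step
        self.gain = gain            # conversion from light intensity to signal
        self.state = 0.0            # accumulated, history-dependent signal

    def expose(self, intensity):
        # Output depends on the current light AND the decaying memory of past light.
        self.state = self.retention * self.state + self.gain * intensity
        return self.state

# Weak-signal accumulation: a faint stimulus repeated over many frames builds up
# a readable signal even though any single frame is barely above zero.
pixel = AdaptivePixel(retention=0.9)
readings = [pixel.expose(0.05) for _ in range(20)]
print(f"single frame: 0.050, accumulated after 20 frames: {readings[-1]:.3f}")

Because the stored trace decays rather than resetting between frames, repeated dim exposures pile up, which is the intuition behind the weak-signal accumulation and adaptation to different lighting conditions described above.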

The US Army throws $20 million into AI-equipped, foldable quadcopters

The U.S. Army has awarded a contract worth $20 million a year to Skydio, a California-based drone manufacturer, as part of its effort to move away from foreign-made, commercially available off-the-shelf drones. Skydio revealed in a press release that it would supply its X2D drones for the U.S. Army’s Short Range Reconnaissance (SRR) program.

The Army’s SRR program has been considering small drones for some time, with the aim of equipping soldiers with rapidly deployable aerial systems that can conduct reconnaissance and surveillance over short ranges. More than 30 vendors submitted proposals to the Army, and five finalists were shortlisted for rigorous testing.

The Drive accessed a federal contract from 2017 that listed the minimum specifications for the SRR program, which include a flight time of 30 minutes, a range of 1.86 miles (3 km), and the ability to tolerate winds of up to 15 knots. Built for the single purpose of reconnaissance, the drone does not need swappable payloads, but it should support mapping missions and be able to geotag imagery.
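For readers who want the reported minimums in one place, the sketch below simply encodes them as a small Python data structure with a pass/fail check. The field names, the layout and the example candidate are illustrative assumptions; they are not Skydio’s published X2D figures or the contract’s actual wording.

from dataclasses import dataclass

# Minimum figures reported from the 2017 contract notice; the structure and the
# hypothetical candidate below are assumptions for illustration only.
@dataclass
class DroneSpec:
    flight_time_min: float   # endurance, minutes
    range_km: float          # operating range, kilometres
    max_wind_knots: float    # tolerated sustained wind, knots
    can_geotag: bool         # can geotag imagery
    supports_mapping: bool   # supports mapping missions

SRR_MINIMUM = DroneSpec(30, 3.0, 15, True, True)

def meets_srr_minimum(c: DroneSpec) -> bool:
    return (c.flight_time_min >= SRR_MINIMUM.flight_time_min
            and c.range_km >= SRR_MINIMUM.range_km
            and c.max_wind_knots >= SRR_MINIMUM.max_wind_knots
            and c.can_geotag and c.supports_mapping)

# Hypothetical candidate, purely for illustration.
print(meets_srr_minimum(DroneSpec(35, 3.5, 18, True, True)))  # True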

Former Valve economist calls Facebook’s metaverse ‘a Steam-like digital economy’ with Zuckerberg as its ‘techno-lord’

Yanis Varoufakis also discussed “pay-to-earn” and the blockchain’s long-term consequences.


Yanis Varoufakis, former Greek finance minister and one-time in-house economist at Valve, gave a long and freewheeling interview to the website the Crypto Syllabus, focusing on the blockchain, its potential and disappointments, and where it sits in the larger context of politics, surveillance and economics.

Of particular note to PC Gamer readers is his description of his time with Valve. Varoufakis had access to Valve’s data on Steam’s nascent player-to-player marketplace in the early 2010s, which informed both his advice to the company and his own economics research. Describing Valve’s initial pitch to him, Varoufakis said:

Why Timnit Gebru Isn’t Waiting for Big Tech to Fix AI’s Problems

For the past decade, AI has been quietly seeping into daily life, from facial recognition to digital assistants like Siri or Alexa. These largely unregulated uses of AI are highly lucrative for those who control them but are already causing real-world harms to those who are subjected to them: false arrests; health care discrimination; and a rise in pervasive surveillance that, in the case of policing, can disproportionately affect Black people and disadvantaged socioeconomic groups.

Gebru is a leading figure in a constellation of scholars, activists, regulators, and technologists collaborating to reshape ideas about what AI is and what it should be. Some of her fellow travelers remain in Big Tech, mobilizing those insights to push companies toward AI that is more ethical. Others, making policy on both sides of the Atlantic, are preparing new rules to set clearer limits on the companies benefiting most from automated abuses of power. Gebru herself is seeking to push the AI world beyond the binary of asking whether systems are biased and to instead focus on power: who’s building AI, who benefits from it, and who gets to decide what its future looks like.

The day after our Zoom call, on the anniversary of her departure from Google, Gebru launched the Distributed AI Research (DAIR) Institute, an independent research group she hopes will grapple with how to make AI work for everyone. “We need to let people who are harmed by technology imagine the future that they want,” she says.

When Gebru was a teenager, war broke out between Ethiopia, where she had lived all her life, and Eritrea, where both of her parents were born. It became unsafe for her to remain in Addis Ababa, the Ethiopian capital. After a “miserable” experience with the U.S. asylum system, Gebru finally made it to Massachusetts as a refugee. Immediately, she began experiencing racism in the American school system, where, even as a high-achieving teenager, she says some teachers discriminated against her and tried to prevent her from taking certain AP classes. Years later, it was a pivotal experience with the police that put her on the path toward ethical technology. She recalls calling the cops after her friend, a Black woman, was assaulted in a bar. When they arrived, the police handcuffed Gebru’s friend and later put her in a cell. A report on the assault was never filed, she says. “It was a blatant example of systemic racism.”
