
This is a disturbing article on Turkey's use of drones in attacks in Syria. What is unclear to me is whether the drones were piloted or autonomous. This is a critical distinction for me: drones piloted by humans remain under human control and are legal. Autonomous drones are killer robots and are immoral.

“Regardless of an exact death toll and damage evaluation, there is a general understanding that the Idlib attacks were an example of effective air warfare, in which killer drones, rather than piloted jets, played a key role. “My understanding is that Turkey compensated for its inability to fly jets over Idlib by using drones, lots of drones,” says Aron Lund, a fellow with U.S.-based think tank The Century Foundation.”

Ban Killer Robots!


Those who want to win do not prepare for wars of today — they prepare for wars that are to be fought tomorrow.

Much of the work undertaken by artificial intelligence involves a training process known as machine learning, in which an AI gets better at a task, such as recognising a cat or mapping a route, the more it performs it. Now that same technique is being used to create new AI systems without any human intervention.
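The "gets better the more it does it" idea can be shown in a few lines: here is a minimal, hypothetical sketch of gradient descent fitting a one-parameter model, where the error shrinks with each pass over the data (the data and learning rate are made up for illustration):

```python
# Minimal sketch: gradient descent fitting y = w * x.
# The training error shrinks the more passes the model makes.

def train(data, passes=20, lr=0.1):
    """Fit a one-parameter linear model, logging total error per pass."""
    w = 0.0
    errors = []
    for _ in range(passes):
        total = 0.0
        for x, y in data:
            pred = w * x
            total += (pred - y) ** 2
            w -= lr * 2 * (pred - y) * x   # gradient step on squared error
        errors.append(total)
    return w, errors

# Points on the line y = 2x; the fitted weight approaches 2.
w, errors = train([(1.0, 2.0), (2.0, 4.0), (3.0, 6.0)])
print(round(w, 3), errors[-1] < errors[0])
```

Real systems train far larger models on far more data, but the loop is the same: measure the error, nudge the parameters, repeat.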

For years, engineers at Google have been working on a freakishly smart machine learning system known as AutoML (automated machine learning), which is already capable of creating AI that outperforms anything we’ve made.

Now, researchers have tweaked it to incorporate concepts of Darwinian evolution and shown it can build AI programs that continue to improve upon themselves faster than they would if humans were doing the coding.
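The Darwinian tweak can be illustrated with a toy version of regularized evolution, the search strategy behind such systems: keep a fixed-size population of candidates, mutate a copy of the best of a random sample, and age out the oldest member each cycle. Everything here, from the three-number "genome" to the fitness target, is a hypothetical stand-in, not Google's actual code:

```python
# Toy regularized evolution: tournament selection plus aging.
import random

def fitness(genome, target=(3.0, -1.0, 2.0)):
    """Negative squared error between a candidate and a toy target."""
    return -sum((g - t) ** 2 for g, t in zip(genome, target))

def mutate(genome, rng, scale=0.5):
    """Perturb one randomly chosen gene of a copied genome."""
    child = list(genome)
    i = rng.randrange(len(child))
    child[i] += rng.uniform(-scale, scale)
    return child

def evolve(cycles=500, pop_size=20, sample_size=5, seed=0):
    rng = random.Random(seed)
    population = [[rng.uniform(-5, 5) for _ in range(3)]
                  for _ in range(pop_size)]
    for _ in range(cycles):
        sample = rng.sample(population, sample_size)
        parent = max(sample, key=fitness)     # tournament winner
        population.append(mutate(parent, rng))
        population.pop(0)                     # aging: drop the oldest
    return max(population, key=fitness)

best = evolve()
print(best)
```

In the real systems the genome encodes a program or network architecture rather than three numbers, and fitness is measured by training the candidate, but the selection-mutation-aging loop is the same.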

When we think of the interaction between mankind and any type of artificial intelligence in mythology, literature, and pop culture, the outcomes are always negative for humanity, if not apocalyptic. In Greek mythology, the blacksmith god Hephaestus created automatons who served as his attendants, and one of them, Pandora, unleashed all the evils into the world. Mary Shelley wrote the character named the Monster in her 1818 novel Frankenstein as the product of the delusions of grandeur of a scientist named Victor Frankenstein. In pop culture, the most notable cases of a once-benign piece of technology running amok are the supercomputer HAL in 2001: A Space Odyssey and the intelligent machines overthrowing mankind in The Matrix. Traditionally, our stories regarding the god-like creative impulse of man bring about something that will overthrow the creators themselves.

The artificial intelligence-powered art exhibition Forging the Gods, curated by Julia Kaganskiy and currently on view at Transfer Gallery, attempts to portray the interaction between humans and machines in a more nuanced manner, showcasing how this relationship already permeates our everyday lives. The exhibition also shows how this relationship is, indeed, fully reflective of the human experience — meaning that machines are no more or less evil than we actually are.

Lauren McCarthy, with her works “LAUREN” (2017) and its follow-up “SOMEONE” (2019), riffs on the trend of smart homes. In the former, she installs remote-controlled networked devices in the homes of volunteers and plays a human version of Alexa, reasoning that she will be better than Amazon’s virtual assistant because, being a human, she can anticipate people’s needs. The follow-up, “SOMEONE,” was originally a live media performance consisting of a four-channel video installation (made to look like a booth one might find at The Wing), in which gallery-goers themselves played human versions of Alexa in the homes of volunteers, who would call for “SOMEONE” whenever they needed something from their smart-controlled devices. Unfortunately, what we see at Forging The Gods is recorded footage of the original run of the performance, so we have to forgo playing God by, say, making someone’s lighting system annoyingly flicker on and off.

The Department of Engineering at Aarhus University is coordinating a FET-Open-backed project to build an entirely new AI hardware technology using nano-scale spintronics that can radically change the way in which computers work. The project will develop a neuromorphic computing system using synaptic neurons implemented in spintronics: a novel AI hardware that can set a framework for AI software in a physical system built like a human brain, upping computer performance by up to 100,000 times.
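The project's spintronic device physics is not public in detail, but neuromorphic systems like this typically implement some variant of a spiking neuron. A minimal software sketch of the standard leaky integrate-and-fire model (all parameter values here are illustrative, not the project's):

```python
# Software sketch of a leaky integrate-and-fire (LIF) neuron, the kind
# of unit neuromorphic hardware typically implements in physics rather
# than in code.

def lif_run(inputs, tau=10.0, v_rest=0.0, v_thresh=1.0, dt=1.0):
    """Simulate one LIF neuron over a sequence of input currents.

    Returns the time steps at which the neuron spiked.
    """
    v = v_rest
    spikes = []
    for t, i_in in enumerate(inputs):
        # Membrane potential leaks toward rest and integrates input.
        v += dt * ((v_rest - v) / tau + i_in)
        if v >= v_thresh:          # threshold crossed: emit a spike
            spikes.append(t)
            v = v_rest             # reset after spiking
    return spikes

# Constant drive above threshold yields a regular spike train.
spike_times = lif_run([0.2] * 50)
print(spike_times)
```

The appeal of doing this in spintronic hardware is that the leak, integration, and threshold become intrinsic device behavior instead of simulated arithmetic, which is where the large claimed efficiency gains come from.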

While the city of Wuhan in China was under quarantine, news surfaced of robots delivering food and, later, medical supplies. Meanwhile, in the United States, the French company NAVYA configured its autonomous passenger shuttles in Florida to transport COVID-19 tests to the Mayo Clinic from off-site test locations. As the weeks of stay-at-home orders and recommendations slip into months, the delivery robots that were seen as a joke, fad, or nuisance have in some instances found a way into the public consciousness as important tools to combat the spread of coronavirus. The question is, will their usefulness extend post-lockdown?

👽 Facial recognition and COVID-19 in Moscow, Russia.

Fyodor R.


MOSCOW – The Russian capital is home to a network of 178,000 surveillance cameras. Thousands of these cameras are already connected to facial recognition software under a program called “Safe City.” Police claim the technology has helped arrest more than 300 people.

It is an engineer’s dream to build a robot as competent as an insect at locomotion, directed action, navigation, and survival in complex conditions. But as well as studying insects to improve robotics, in parallel, robot implementations have played a useful role in evaluating mechanistic explanations of insect behavior, testing hypotheses by embedding them in real-world machines. The wealth and depth of data coming from insect neuroscience hold the tantalizing possibility of building complete insect brain models. Robotics has a role to play in maintaining a focus on functional understanding—what do the neural circuits need to compute to support successful behavior?

Insect brains have been described as “minute structures controlling complex behaviors” (1): Compare the number of neurons in the fruit fly brain (∼135,000) to that in the mouse (70 million) or human (86 billion). Insect brain structures and circuits evolved independently to solve many of the same problems faced by vertebrate brains (or a robot’s control program). Despite the vast range of insect body types, behaviors, habitats, and lifestyles, there are many surprising consistencies across species in brain organization, suggesting that these might be effective, efficient, and general-purpose solutions.

Unraveling these circuits combines many disciplines, including painstaking neuroanatomical and neurophysiological analysis of the components and their connectivity. An important recent advance is the development of neurogenetic methods that provide precise control over the activity of individual neurons in freely behaving animals. However, the ultimate test of mechanistic understanding is the ability to build a machine that replicates the function. Computer models let researchers copy the brain’s processes, and robots allow these models to be tested in real bodies interacting with real environments (2). The following examples illustrate how this approach is being used to explore increasingly sophisticated control problems, including predictive tracking, body coordination, navigation, and learning.
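One navigation computation that such robot models routinely test is path integration (dead reckoning), which insects such as desert ants use to keep a running "home vector" by summing the direction and distance of each step. A minimal sketch, with made-up step data:

```python
# Path integration: accumulate each step's displacement vector so the
# agent always knows the distance and bearing back to its start point.
import math

def path_integrate(steps):
    """Sum (heading_deg, distance) steps into a home vector.

    Returns (distance_home, bearing_home_deg): how far and in which
    compass direction the agent must travel to return to the origin.
    """
    x = y = 0.0
    for heading_deg, dist in steps:
        x += dist * math.cos(math.radians(heading_deg))
        y += dist * math.sin(math.radians(heading_deg))
    distance_home = math.hypot(x, y)
    # The home bearing points from the current position back to (0, 0).
    bearing_home = math.degrees(math.atan2(-y, -x)) % 360
    return distance_home, bearing_home

# Walk 3 m at heading 0°, then 4 m at heading 90°:
# home is 5 m away, roughly opposite the outbound direction.
d, b = path_integrate([(0, 3.0), (90, 4.0)])
print(round(d, 2), round(b, 2))
```

Embedding a hypothesized neural circuit for this computation in a robot, rather than just simulating it, exposes it to the sensor noise and motor slip that the insect's own circuit must tolerate.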