There is a rift between near- and long-term perspectives on AI safety – one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics have accused the Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that we shouldn’t be fighting about the Terminator; we should be focusing on the harm to the mind itself – to our very freedom to think.

There has been a growing debate between near- and long-term perspectives on AI safety – one that has stirred controversy. “Longtermists” have been accused of being co-opted by Big Tech and fixating on science-fiction, Terminator-style scenarios to distract regulators from the real, more near-term issues, such as algorithmic bias and data privacy.

Longtermism is an ethical theory that requires us to consider the effects of today’s decisions on all of humanity’s potential futures. Taken to an extreme, it can conclude that one should sacrifice the present wellbeing of humanity for the good of its potential futures. Many Longtermists believe humans will ultimately lose control of AI, as it will become “superintelligent”, outthinking humans in every domain – social acumen, mathematical ability, strategic thinking, and more.

Gerard Johnstone and Akela Cooper’s 2022 sci-fi horror M3GAN (streaming now on Peacock) follows the tragic childhood of Cady, a young girl whose parents are killed in a car accident. In the aftermath, Cady goes to live with her Aunt Gemma, a roboticist who has invented the Model 3 Generative Android, M3GAN for short.

M3GAN is a child-sized anthropomorphic robot with human-level intelligence, designed to be the perfect friend. M3GAN’s primary directive is to keep Cady safe and happy, and it’s a job she executes with deadly seriousness. In the real world, scientists in China recently crafted a less deadly but equally spine-tingling intelligent robot controlled not by a person or by programming, but by a spheroid blob of human brain tissue.

Author: Kiyana Rahimian.

“Pick all the cells with traffic lights”
Often, when opening a website, users must verify that they are not automated software by completing a test in which they select certain parts of a picture. Such tests, known as CAPTCHAs, are a form of automated Turing test. The Turing test, devised by Alan Turing, helps determine whether a machine can replicate human intelligence. So far, even though engineers have sought to replicate brain-like functions by developing artificial intelligence (AI), no one has succeeded in replicating human brain function. Although the human brain processes basic information, such as numbers, more slowly than machines do, it is better able to process complex information. Intuitive reasoning allows human brains to perform considerably better with sparse, diverse, or incomplete information. Compared with silicon-based computers, human brains are also better at storing information and are far more energy-efficient.

The work is a cross-disciplinary multimedia performance piece featuring self-developed found-material robots, real-time AI generation, motion tracking, audio spatialization, and biofeedback-based audio synthesis. The immersive piece challenges the human-centric perspective and invites audiences to contemplate the coexistence of technology, nature, and ourselves.

Credits in alphabetical order):
Co-Directors: Mingyong Cheng, Sophia Sun, Han Zhang.
Performers: Yuemeng Gu, Erika Roos.
Robotic Engineer: Sophia Sun.
Visual Artist: Mingyong Cheng.
Sound Designer: Han Zhang.
Lighting Engineer: Zehao Wang, Han Zhang.
Video Editor: Yuemeng Gu.
Post Production Coordinator: Mingyong Cheng.
Technical & Installation Support: Yifan Guo, Ke Li, Zehao Wang, Zetao Yu.

Special thanks to Palka Puri for plant support, the Initiative for Digital Exploration of Arts and Sciences (IDEAS) program at the University of California San Diego and Qualcomm Institute for sponsoring this project, and the AV team from the California Institute for Telecommunications and Information Technology (Calit2) for installation and media support.

A recently developed wirelessly powered 5G relay could accelerate the development of smart factories, report scientists from Tokyo Tech. Its innovative design delivers high power-conversion efficiency and improved versatility. By adopting a lower operating frequency for wireless power transfer, the proposed relay design overcomes many current limitations, including range and efficiency. In turn, this allows for a more versatile and widespread arrangement of sensors and transceivers in industrial settings.

One of the hallmarks of the Information Age is the transformation of industries towards a greater flow of information. This can be readily seen in high-tech factories and warehouses, where wireless sensors and transceivers are installed in robots, production machinery, and automatic vehicles. In many cases, 5G networks are used to orchestrate operations and communications between these devices.

To avoid relying on cumbersome wired power sources, sensors and transceivers can be energized remotely via wireless power transfer (WPT). However, one problem with conventional WPT designs is that they operate at 24 GHz. At such high frequencies, transmission beams must be extremely narrow to avoid energy losses. Moreover, power can only be transmitted if there is a clear line of sight between the WPT system and the target device. Since 5G relays are often used to extend the range of 5G base stations, WPT needs to reach even further, which is yet another challenge for 24 GHz systems.
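The frequency dependence described above follows from the Friis free-space link budget: path loss grows with the square of frequency, so a 24 GHz power link sheds far more energy over the same distance than a lower-frequency one. The sketch below is illustrative and not from the article; the function name, transmit power, antenna gains, and the 5.7 GHz comparison frequency are all assumptions chosen for the example.

```python
import math

def friis_received_power_dbm(pt_dbm, gt_dbi, gr_dbi, freq_hz, dist_m):
    """Received power (dBm) over a free-space link, per the Friis equation.

    Free-space path loss in dB is 20*log10(4*pi*d/lambda); since
    wavelength shrinks as frequency rises, loss grows ~20 dB per
    decade of frequency for a fixed distance.
    """
    c = 299_792_458.0                      # speed of light, m/s
    wavelength = c / freq_hz
    fspl_db = 20 * math.log10(4 * math.pi * dist_m / wavelength)
    return pt_dbm + gt_dbi + gr_dbi - fspl_db

# Same transmit power (30 dBm) and antenna gains (10 dBi each), 10 m range:
p_24ghz = friis_received_power_dbm(30, 10, 10, 24e9, 10)
p_5ghz = friis_received_power_dbm(30, 10, 10, 5.7e9, 10)
print(p_24ghz, p_5ghz)  # the 24 GHz link receives ~12.5 dB less power
```

This is why lower-frequency designs can tolerate wider beams and longer ranges: the same antenna gains and distance leave substantially more power at the receiver.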

Four-legged animals are innately capable of agile and adaptable movements, which allow them to move on a wide range of terrains. Over the past decades, roboticists worldwide have been trying to effectively reproduce these movements in quadrupedal (i.e., four-legged) robots.

Computational models trained via reinforcement learning have been found to achieve particularly promising results for enabling agile locomotion in quadruped robots. However, these models are typically trained in simulated environments, and their performance often declines when they are transferred to real robots in real-world environments – the so-called sim-to-real gap.

Alternative approaches to realizing agile quadruped locomotion use footage of moving animals, collected by cameras, as demonstrations for training controllers (i.e., algorithms that execute the robot’s movements). This approach, dubbed “imitation learning,” was found to enable the reproduction of animal-like movements in some quadrupedal robots.
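At its simplest, imitation learning of this kind can be cast as behavior cloning: given demonstration state–action pairs extracted from the reference motion, fit a policy that maps states to actions. The following is a minimal sketch under that assumption, with synthetic stand-in data (the state and action dimensions, and the linear policy, are illustrative choices, not the method of any system the article describes).

```python
import numpy as np

# Hypothetical demonstration data: each state row stands in for a robot
# configuration (e.g. joint angles); each action row is the command the
# animal-derived reference motion implies for that state.
rng = np.random.default_rng(0)
states = rng.normal(size=(200, 6))   # 200 demonstration states, 6-D
true_W = rng.normal(size=(6, 3))     # unknown "demonstrator" mapping
actions = states @ true_W            # corresponding 3-D demo actions

# Behavior cloning with a linear policy: find W minimizing
# ||states @ W - actions||^2 via least squares.
W, *_ = np.linalg.lstsq(states, actions, rcond=None)

# The cloned policy can now propose actions for unseen states.
new_state = rng.normal(size=(1, 6))
predicted_action = new_state @ W
```

Real controllers replace the linear map with a neural network and add techniques to bridge the sim-to-real gap, but the core supervised-learning structure – regress demonstrated actions from states – is the same.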