The rising visibility of Ethical AI or AI Ethics is doing great good; meanwhile, some believe it isn’t enough, and a semblance of embracing Radical Ethical AI is appearing. This is closely examined, including for AI-based self-driving cars.

Has the prevailing tenor and attention of today’s widely emerging semblance of AI Ethics gotten into a veritable rut? Some seem to decidedly think so.

Let’s unpack this. You might generally be aware that there has been a rising tide of interest in the ethical ramifications of AI. This is often referred to as either AI Ethics or Ethical AI; herein we’ll treat those two monikers as predominantly equivalent and interchangeable (I suppose some might quibble about that assumption, but I’d like to suggest that we not get distracted by the potential differences, if any, for the purposes of this discussion).

Summary: A new ethical framework proposes that researchers should assume brain organoids already have consciousness, rather than waiting for research to confirm that they do.

Source: Kyoto University.

One way that scientists are studying how the human body grows and ages is by creating artificial organs in the laboratory. The most popular of these is currently the organoid, a miniaturized organ made from stem cells. Organoids have been used to model a variety of organs, but brain organoids are the most clouded by controversy.

The AI nanny is here! In a new feat for science, robots and AI can now be paired to optimise the creation of human life. In a Matrix-esque reality, robotics and artificial intelligence can now help to develop babies with algorithms and artificial wombs.

As reported by the South China Morning Post, Chinese scientists in Suzhou have developed the new technology. However, there are worries surrounding the ethics of artificially growing human babies.

Fujitsu said it will establish an AI ethics and governance office to ensure the safe and secure deployment of AI technologies.

To be headed by Junichi Arahori, the new office will focus on implementing ethical measures related to the research, development, and implementation of AI and other machine learning applications.

“This marks the next step in Fujitsu’s ongoing efforts to strengthen and enforce comprehensive, company-wide measures to achieve robust AI ethics governance based on international best-practices, policies, and legal frameworks,” the company stated.

China is trailblazing AI regulation, with the goal of being the AI leader by 2030. We look at its AI ethics guidelines.


The European Union had issued a preliminary draft of AI-related rules in April 2021, but we’ve seen nothing final. In the United States, the notion of ethical AI has gotten some traction, but there aren’t any overarching regulations or universally accepted best practices.

Poor Artificial Intelligence (AI). For years, it has had to sit there (like a dormant Skynet) listening to its existence being debated, without getting to have a say. A recent debate held at the University of Oxford tried to put that right by including an AI participant in a debate on the topic of whether AI can ever be ethical.

The debate involved human participants, as well as the Megatron Transformer, an AI created by the Applied Deep Learning Research team at computer-chip maker Nvidia. The Megatron has been trained on a dataset called “The Pile”, which includes the whole of Wikipedia, 63 million English news articles, and 38 gigabytes of Reddit conversations — more than enough to break the mind of any human forced to do likewise.

“In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime,” Oxford’s Professor Andrew Stephen wrote in a piece on the debate published in The Conversation. “After such extensive research, it forms its own views.”

They say ‘I believe in nature. Nature is harmonious’. Every big fish is eating every smaller fish. Every organ is fighting constantly invading bacteria. Is that what you mean by harmony? There are planets that are exploding out there. Meteorites that hit another and blow up. What’s the purpose of that? What’s the purpose of floods? To drown people? In other words, if you start looking for purpose, you gotta look all over, take in the whole picture. So, man projects his own values into nature. — Jacque Fresco (March 13, 1916 — May 18, 2017)

When most of us use the word ‘nature’, we really don’t know much about it in reality. — Ursa.

Lethal autonomous weapons systems (LAWS), also called “killer robots” or “slaughterbots” and under development by a clutch of countries, have been a topic of debate, with international military, ethics, and human rights circles raising concerns. Recent talks about a ban on these killer robots have brought them into the spotlight yet again.

What Are Killer Robots?

The exact definition of a killer robot is fluid. However, most agree that killer robots may be broadly described as weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without any meaningful human control.