The AI nanny is here! In a new feat for science, robots and AI can now be paired to optimise the creation of human life. In a Matrix-esque turn, robotics and artificial intelligence can now help develop babies with algorithms and artificial wombs.
As reported by the South China Morning Post, scientists in Suzhou, China, have developed the new technology. However, there are worries about the ethics of artificially growing human babies.
Fujitsu said it will establish an AI ethics and governance office to ensure the safe and secure deployment of AI technologies.
To be headed by Junichi Arahori, the new office will focus on implementing ethical measures related to the research, development, and deployment of AI and other machine learning applications.
“This marks the next step in Fujitsu’s ongoing efforts to strengthen and enforce comprehensive, company-wide measures to achieve robust AI ethics governance based on international best-practices, policies, and legal frameworks,” the company stated.
China is trailblazing AI regulation, with the goal of being the AI leader by 2030. We look at its AI ethics guidelines.
The European Union issued a preliminary draft of AI-related rules in April 2021, but nothing final has emerged. In the United States, the notion of ethical AI has gained some traction, but there are no overarching regulations or universally accepted best practices.
Poor Artificial Intelligence (AI). For years, it has had to sit there (like a dormant Skynet) listening to its existence being debated, without getting to have a say. A recent debate held at the University of Oxford tried to put that right by including an AI participant in a debate on the topic of whether AI can ever be ethical.
The debate involved human participants, as well as the Megatron Transformer, an AI created by the Applied Deep Learning Research team at computer-chip maker Nvidia. The Megatron has been trained on a dataset called “The Pile”, which includes the whole of Wikipedia, 63 million English news articles, and 38 gigabytes of Reddit conversations — more than enough to break the mind of any human forced to do likewise.
“In other words, the Megatron is trained on more written material than any of us could reasonably expect to digest in a lifetime,” Oxford’s Professor Andrew Stephen wrote in a piece on the debate published in The Conversation. “After such extensive research, it forms its own views.”
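That lifetime-of-reading claim can be sanity-checked with back-of-envelope arithmetic. The sketch below uses the article's 38 GB figure for the Reddit portion alone; the bytes-per-word and reading-speed values are illustrative assumptions, not from the article:

```python
# Rough check: how long would it take a human to read just the
# 38 GB Reddit slice of Megatron's training data?
REDDIT_BYTES = 38 * 10**9   # figure cited in the article
BYTES_PER_WORD = 6          # assumption: ~6 bytes per English word (incl. space)
WORDS_PER_MINUTE = 250      # assumption: brisk adult reading speed
HOURS_PER_DAY = 8           # assumption: full-time reading, no weekends off

words = REDDIT_BYTES / BYTES_PER_WORD
years = words / WORDS_PER_MINUTE / 60 / HOURS_PER_DAY / 365

print(f"~{words:.1e} words, ~{years:.0f} years of full-time reading")
```

Even under these generous assumptions, the Reddit data alone works out to well over a century of non-stop reading, before counting Wikipedia or the 63 million news articles.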
They say ‘I believe in nature. Nature is harmonious’. Every big fish is eating every smaller fish. Every organ is fighting constantly invading bacteria. Is that what you mean by harmony? There are planets that are exploding out there. Meteorites that hit another and blow up. What’s the purpose of that? What’s the purpose of floods? To drown people? In other words, if you start looking for purpose, you gotta look all over, take in the whole picture. So, man projects his own values into nature. — Jacque Fresco (March 13, 1916 — May 18, 2017)
When most of us use the word ‘nature’, we really don’t know much about it in reality. — Ursa.
Lethal autonomous weapons systems (LAWS), also called “killer robots” or “slaughterbots”, are being developed by a clutch of countries and have been a topic of debate, with international military, ethics, and human rights circles raising concerns. Recent talks about a ban on these weapons have brought them into the spotlight yet again.
What Are Killer Robots?
The exact definition of a killer robot is fluid. However, most agree that they may be broadly described as weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without any meaningful human control.
How close are we to having fully autonomous vehicles on the roads? Are they safe? In Chandler, Arizona, a fleet of Waymo vehicles is already in operation. Waymo sponsored this video and provided access to their technology and personnel. Check out their safety report here: https://waymo.com/safety/
Who better to answer the pros and cons of artificial intelligence than an actual AI?
Students at Oxford’s Said Business School hosted an unusual debate about the ethics of facial recognition software, the problems of an AI arms race, and AI stock trading. The debate was unusual because it involved an AI participant, previously trained on a huge range of data, including the whole of Wikipedia and plenty of news articles.
Over the last few months, Oxford University’s Alex Connock and Andrew Stephen have hosted sessions with their students on the ethics of technology, following in an Oxford debating tradition that has featured celebrated speakers including William Gladstone, Denis Healey, and Tariq Ali. But now it was about time to allow an actual AI to contribute, sharing its own views on the issue of … itself.
The AI used was the Megatron Transformer, developed by a research team at the computer-chip company Nvidia and based on earlier work by Google. It was trained by consuming more content than a human could in a lifetime and was asked both to defend and to oppose the following motion: “This house believes that AI will never be ethical.”