
Lethal autonomous weapons systems (LAWS), also called “killer robots” or “slaughterbots,” are being developed by a handful of countries and have long been a topic of debate, drawing concern from international military, ethics, and human rights circles. Recent talks about a ban on these weapons have brought them into the spotlight yet again.

What Are Killer Robots?

The exact definition of a killer robot is fluid. However, most agree that they may be broadly described as weapons systems that use artificial intelligence (AI) to identify, select, and kill human targets without any meaningful human control.

How close are we to having fully autonomous vehicles on the roads? Are they safe? In Chandler, Arizona, a fleet of Waymo vehicles is already in operation. Waymo sponsored this video and provided access to their technology and personnel. Check out their safety report here: https://waymo.com/safety/

References:

Waymo Safety Reports — https://waymo.com/safety/

Driving Statistics — https://ve42.co/DrivingStats

Who better to answer the pros and cons of artificial intelligence than an actual AI?


Students at Oxford’s Saïd Business School hosted an unusual debate about the ethics of facial recognition software, the problems of an AI arms race, and AI stock trading. The debate was unusual because it involved an AI participant that had previously been fed a huge range of data, including the whole of Wikipedia and plenty of news articles.

Over the last few months, Oxford University’s Alex Connock and Andrew Stephen have hosted sessions with their students on the ethics of technology, with celebrated speakers including William Gladstone, Denis Healey, and Tariq Ali. But now it was time to let an actual AI contribute, sharing its own views on the issue of … itself.

Welcome to the future of moral dilemmas.

Not a day passes without a fascinating snippet on the ethical challenges created by “black box” artificial intelligence systems. These use machine learning to figure out patterns within data and make decisions — often without a human giving them any moral basis for how to do it.

Classics of the genre are the credit cards accused of awarding bigger loans to men than women, based simply on which gender got the best credit terms in the past. Or the recruitment AIs that discovered the most accurate tool for candidate selection was to find CVs containing the phrase “field hockey” or the first name “Jared”.
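The “Jared” and “field hockey” anecdote is a textbook case of a model learning a spurious proxy from biased historical labels. The sketch below (with entirely made-up toy data, not any real recruitment system) trains a minimal perceptron on hiring decisions where an irrelevant token happens to correlate with past hires, and shows the model rewarding that token:

```python
# Hypothetical toy data: each CV is a set of tokens plus the historical
# hiring decision (1 = hired). "field hockey" correlates with past hires
# purely by accident of the biased history.
cvs = [
    ({"python", "field hockey"}, 1),
    ({"java", "field hockey"}, 1),
    ({"python", "chess"}, 0),
    ({"java", "chess"}, 0),
    ({"python", "field hockey"}, 1),
    ({"java", "chess"}, 0),
]

vocab = sorted({t for tokens, _ in cvs for t in tokens})
weights = {t: 0.0 for t in vocab}
bias = 0.0

# A few epochs of plain perceptron updates on the biased labels.
for _ in range(10):
    for tokens, label in cvs:
        score = bias + sum(weights[t] for t in tokens)
        pred = 1 if score > 0 else 0
        if pred != label:
            for t in tokens:
                weights[t] += label - pred
            bias += label - pred

# "field hockey" ends up the strongest positive signal, even though it
# says nothing about job competence.
print(sorted(weights.items(), key=lambda kv: -kv[1]))
```

The model is perfectly “accurate” on its training data, which is exactly the problem: it has faithfully reproduced the bias baked into the labels.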

But wait, should we believe it?


An artificial intelligence warning AI researchers about the dangers of AI sounds like the setup of a delightful B movie, but truth is often stranger than fiction.

A professor and a fellow at the University of Oxford came face to face with that reality when they invited an AI to participate in a debate at the Oxford Union on, you guessed it, the ethics of AI. Specifically, as Dr. Alex Connock and Professor Andrew Stephen explain in The Conversation, the prompt was “This house believes that AI will never be ethical.” The AI, it seems, agreed.

“AI will never be ethical,” argued the Megatron-Turing Natural Language Generation model, which was notably trained on Wikipedia, Reddit, and millions of English-language news articles published between 2016 and 2019. “It is a tool, and like any tool, it is used for good and bad.”

PolyAI Ltd. is an ambitious startup that creates artificial voices to replace call center operators. Based in London, it has raised $28 million to bring AI-powered customer service to Metro Bank Plc, BP Plc and more. The idea is that instead of the nightmare of dialing random digits in a decision tree, you can instead ask to, say, book a table and a voice — with just the slightest inflection of its machine-learning origins — responds with great civility. That’s nice. But there was a brief moment two years ago when it wasn’t polite at all.

A software developer at PolyAI who was testing the system asked about booking a table for himself and a Serbian friend. “Yes, we allow children at the restaurant,” the voice bot replied, according to PolyAI founder Nikola Mrksic. Seemingly out of nowhere, the bot was trying to make an obnoxious joke about people from Serbia. When asked about bringing a Polish friend, it replied, “Yes, but you can’t bring your own booze.”


Money is pouring into artificial intelligence. Not so much into ethics. That’ll be a problem down the line.

Let me back up a moment. I recently concurred with megapundit Steven Pinker that over the last two centuries we have achieved material, moral and intellectual progress, which should give us hope that we can achieve still more. I expected, and have gotten, pushback. Pessimists argue that our progress will prove to be ephemeral; that we will inevitably succumb to our own nastiness and stupidity and destroy ourselves.

Maybe, maybe not. Just for the sake of argument, let’s say that within the next century or two we solve our biggest problems, including tyranny, injustice, poverty, pandemics, climate change and war. Let’s say we create a world in which we can do pretty much anything we choose. Many will pursue pleasure, finding ever more exciting ways to enjoy themselves. Others may seek spiritual enlightenment or devote themselves to artistic expression.

No matter what our descendants choose to do, some will surely keep investigating the universe and everything in it, including us. How long can the quest for knowledge continue? Not long, I argued 25 years ago this month in The End of Science, which contends that particle physics, cosmology, neuroscience and other fields are bumping into fundamental limits. I still think I’m right, but I could be wrong. Below I describe the views of three physicists—Freeman Dyson, Roger Penrose and David Deutsch—who hold that knowledge seeking can continue for a long, long time, and possibly forever, even in the face of the heat death of the universe.

Epicurus and Epicurean philosophy may not be as popular as Stoicism in today’s world, but different philosophies work for different people. The insights of Epicurean ethics on God, death, pleasure, friendship, love, and more can shape the way you act. Although less popular today, it has influenced many thinkers throughout history, such as Spinoza.
In this video, I explain the “four-part cure” for life as presented in The Epicurus Reader: not fearing God or death, and which pleasures to pursue and which to avoid. I quote mainly from the Letter to Menoeceus, though the Principal Doctrines also provide good sayings.
I hope this video gives you some insight into how you might act, and hopefully you find something useful in it.

Song: FSM Team feat. escp — Lazy Afternoon.
