Bioethicist Nita Farahany says privacy law hasn’t kept up with science as employers increasingly use neurotechnology in the workplace.
Category: ethics
Why do AI ethics conferences fail? They fail because they lack a metatheory to explain how ethical disagreements can emerge from phenomenologically different worlds, how those worlds are revealed to us, and how shifts between them have shaped the development of Western civilization over the last several thousand years, from the Greeks and Romans through the Renaissance and the Enlightenment.
So perhaps we’ve given up on the ethics hand-wringing a bit too early. More precisely, a third, nonzero-sum approach that combines ethics with reciprocal accountability is available, and it actually does explain this. But first, consider the flaw in simple reciprocal accountability. Yes, right now we can use ChatGPT to catch ChatGPT cheats, and provide many other balancing feedbacks. But as noted above with reference to the colonization of Indigenous nations, once the technological and developmental gap grows large enough, dynamics that once operated largely under our control and in our favor can change quickly, and the former allies become the new masters.
Forrest Landry capably identified that problem during a recent conversation with Jim Rutt. The implication one might draw is that, though we may not like it, there is in fact a role to be played by axiology (or more precisely, by a phenomenologically informed understanding of axiology). Zak Stein identifies some of this in his article “Technology is Not Values Neutral”. Lastly, Iain McGilchrist brings both of these topics, power and value, together in his metatheory of attention, which uses that same notion of reciprocal accountability (there called opponent processing). And there is historical precedent here too; we can point to biological analogues. This is all instantiated in the neurology of the brain, and it goes back at least as far as Nematostella vectensis, a sea anemone whose lineage dates back some 700 million years. So the opponent processing of two very different ways of attending to the world has worked for a very long time, setting two very different phenomenological worlds (and their associated ethical frameworks) against each other as counterbalances.
There is a new catchphrase that some are using when talking about today’s generative AI. I am loath to repeat it, but the discomfort of doing so is worth the chance of curtailing its usage going forward.
Are you ready?
Some have been saying that generative AI such as ChatGPT is so-called alien intelligence. Hogwash. This kind of phrasing has to stop. Here are the reasons why.
This video will cover the philosophy of artificial intelligence, the branch of philosophy that explores what artificial intelligence is, along with other philosophical questions surrounding it: Can a machine act intelligently? Is the human brain essentially a computer? Can a machine be alive the way a human is? Can it have a mind and consciousness? Can we build AI and align it with our values and ethics? If so, which ethical systems do we choose?
We’re going to cover all of those questions, and possible answers to them, in what will hopefully be an easy-to-understand, 101-style manner.
0:00 Introduction.
Progress is speeding up even as the world barrels toward one of innumerable disasters. What lies ahead, and what should we do when we get there? In the best-case scenario, we may still have control over our direction.
Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.
To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.
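The active-inference principle described above can be made concrete with a toy sketch: an agent maintains a belief (a probability distribution) over a hidden state, and selects the action whose expected observation most reduces the entropy of its updated belief, i.e. it acts to reduce uncertainty. This is only an illustrative simplification, not Friston's formal free-energy account; the sensor names and probabilities below are invented for the example.

```python
import math

def entropy(p):
    # Shannon entropy (nats) of a probability distribution
    return -sum(q * math.log(q) for q in p if q > 0)

def normalize(p):
    s = sum(p)
    return [q / s for q in p]

def bayes_update(prior, obs_likelihood):
    # posterior ∝ P(obs | state) * P(state)
    return normalize([l * q for l, q in zip(obs_likelihood, prior)])

# Two hidden states; two candidate actions, each yielding a binary
# observation. likelihood[action][obs][state] = P(obs | state, action).
# "look_left" is an informative sensor; "look_right" tells us nothing.
likelihood = {
    "look_left":  [[0.9, 0.2], [0.1, 0.8]],
    "look_right": [[0.5, 0.5], [0.5, 0.5]],
}

def expected_posterior_entropy(prior, action):
    # Average, over possible observations, of the entropy of the
    # belief we would hold after seeing that observation.
    total = 0.0
    for obs_likelihood in likelihood[action]:
        p_obs = sum(l * q for l, q in zip(obs_likelihood, prior))
        if p_obs > 0:
            total += p_obs * entropy(bayes_update(prior, obs_likelihood))
    return total

def choose_action(prior):
    # Act to reduce uncertainty: pick the action minimizing the
    # expected entropy of the updated belief.
    return min(likelihood, key=lambda a: expected_posterior_entropy(prior, a))

prior = [0.5, 0.5]
best = choose_action(prior)  # the informative sensor wins
```

Starting from a maximally uncertain prior, the agent prefers "look_left" because its observations are expected to leave the belief sharper, which is the uncertainty-reducing behavior the paragraph describes.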
Pod version: https://podcasters.spotify.com/pod/show/machinelearningstree…on-e208f50
Anti-AI / AI-ethics clowns are now pushing the government for some criminalization, right on cue.
A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.
OpenAI “has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment,” said a complaint to the FTC submitted today by the Center for Artificial Intelligence and Digital Policy (CAIDP).
Calling for “independent oversight and evaluation of commercial AI products offered in the United States,” CAIDP asked the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”
A new report indicates Microsoft will expand AI products, but axe the people who make them ethical.
The real move at play here, by the so-called AI ethics clowns, is a complete shutdown of AI and AI research. That IS their end goal, their endgame. First see if they can really turn it off for 6 months. Ha! OK, how about 2 more years? And so on.
You’ve publicly tipped your hand.
An open letter published today calls for “all AI labs to immediately pause for at least 6 months the training of AI systems more powerful than GPT-4.”
This 6-month moratorium would be better than no moratorium. I have respect for everyone who stepped up and signed it. It’s an improvement on the margin.
Take a journey through the years 2023–2030 as artificial intelligence develops increasing levels of consciousness, becomes an indispensable partner in human decision-making, and even leads key areas of society. But as the line between human and machine blurs, society grapples with the moral and ethical implications of sentient machines, and the question arises: which side of history will you be on?
AI news timestamps:
0:00 AI consciousness intro.
0:17 Unconscious artificial intelligence.
1:54 AI influence in media.
3:13 AI decisions.
4:05 AI awareness.
5:07 The AI ally.
6:07 Machine human hybrid minds.
7:02 Which side.
7:55 The will of artificial intelligence.