According to a study published in Scientific Reports.
Established in 2011, <em>Scientific Reports</em> is a peer-reviewed open-access scientific mega journal published by Nature Portfolio, covering all areas of the natural sciences. In September 2016, it became the largest journal in the world by number of articles, overtaking <em>PLOS ONE</em>.
I possibly cheated on my wife once. Alone in a room, I watched as a young woman reached out her hands and seductively groped mine, inviting me to engage and embrace her. I went with it.
Twenty seconds later, I pulled back and ripped off my virtual reality gear. Around me, dozens of tech conference-goers were waiting in line to try the same computer program an exhibitor was hosting. I warned colleagues in line that this was no game. It created real emotions and challenged norms of partnership and sexuality. But does it really? And who benefits from this?
Around the world, a minor sexual revolution is occurring. It’s not so much about people stepping outside their moral boundaries as it is about new technology. Virtual reality haptic suits, sexbots, and even implanted sexual devices (some controlled by strangers from around the world) are increasingly in use. Through what is often called digisexuality, some people, especially those who find it awkward to fit into traditional sexual roles, are finding newfound relationships and more meaningful sex.
As with much new technology, problems abound. Psychologists warn that technology, especially interactive tech, is making humans more distant from the real world. Naysayers of the burgeoning techno-sex industry say this type of intimacy is not the real thing, and that it’s little different from a Pavlovian trick. But studies show the brain barely distinguishes between arousal from pornography and arousal from sex with a real person. If we take that one step further and engage with people in immersive virtual reality, our brain appears to notice even less of a difference.
Why do AI ethics conferences fail? They fail because they lack a metatheory that explains how ethical disagreements can emerge from phenomenologically different worlds, how those worlds are revealed to us, and how shifts between them have shaped the development of Western civilization for the last several thousand years, from the Greeks and Romans through the Renaissance and Enlightenment.
So perhaps we’ve given up on the ethics hand-wringing a bit too early. Or more precisely, a third, nonzero-sum approach is available, one that combines ethics with reciprocal accountability and actually does explain this. But first, let’s consider the flaw in simple reciprocal accountability. Yes, right now we can use ChatGPT to catch ChatGPT cheats, and provide many other balancing feedbacks. But as has been noted above with reference to the colonization of Indigenous nations, once the technological/developmental gap is sufficiently large, the dynamics that now operate largely under our control and in our favor can quickly change, and the former allies become the new masters.
Forrest Landry capably identified that problem during a recent conversation with Jim Rutt. The implication one might draw is that, though we may not like it, there is in fact a role to be played by axiology (or more precisely, by a phenomenologically informed understanding of axiology). Zak Stein identifies some of this in his article “Technology is Not Values Neutral”. Lastly, Iain McGilchrist brings both of these topics, power and value, together in his metatheory of attention, which uses that same notion of reciprocal accountability (there called opponent processing). And yes, there is historical precedent here too; we can point to biological analogues. This is all instantiated in the neurology of the brain, and it goes back at least as far as Nematostella vectensis, a sea anemone whose lineage stretches back some 700 million years. So the opponent processing of two very different ways of attending to the world has worked for a very long time, setting two very different phenomenological worlds (and their associated ethical frameworks) against each other so that they counterbalance.
There is a new catchphrase that some are using when talking about today’s generative AI. I am loath to repeat the phrase, but the discomfort of doing so is worth the chance of curtailing its usage going forward.
Are you ready?
Some have been saying that generative AI such as ChatGPT is a so-called alien intelligence. Hogwash. This kind of phrasing has to be stopped. Here are the reasons to do so.
This video will cover the philosophy of artificial intelligence, the branch of philosophy that explores what exactly artificial intelligence is, along with other philosophical questions surrounding it: Can a machine act intelligently? Is the human brain essentially a computer? Can a machine be alive like a human is? Can it have a mind and consciousness? Can we build A.I. and align it with our values and ethics? If so, what ethical systems do we choose?
We’re going to be covering all those questions and possible answers to them in what will hopefully be an easy-to-understand, 101-style manner.
0:00 Introduction
0:45 What is Artificial Intelligence?
1:13 Rene Descartes
2:11 Alan Turing & the ‘Turing Test’
3:42 A.I.M.A. & A.I.
4:45 Intelligent Agents
5:40 Newell’s Definition
6:26 Weak A.I. vs Strong A.I.
7:31 Narrow A.I. vs General A.I. vs Super Intelligence
10:00 Computationalism
10:44 Approaches to A.I.
13:32 Can a Machine Have Consciousness?
14:23 The ‘Chinese Room’
16:30 Critical Responses
17:18 The ‘Hard Problem of Consciousness’
18:47 Philosophical Zombies
21:20 New Questions in the Philosophy of A.I.
21:34 Singularitarianism
24:40 A.I. Alignment
26:45 The Orthogonality Thesis
27:36 The Ethics of A.I.
30:56 Conclusion
Descartes, R., 1637, in Haldane, E. and Ross, G.R.T. (translators), 1911, The Philosophical Works of Descartes, Volume 1, Cambridge, UK: Cambridge University Press.
Russell, S. & Norvig, P., 2009, Artificial Intelligence: A Modern Approach, 3rd edition, Upper Saddle River, NJ: Prentice Hall.
Progress is speeding up even as the world barrels toward any of innumerable possible disasters. What lies ahead, and what should we do when we get there? In the best-case scenario, we may still have control over our direction.
This special edition of the show is sponsored by Numerai. Please visit them via our sponsor link (we would really appreciate it): http://numer.ai/mlst.
Prof. Karl Friston recently proposed a vision of artificial intelligence that goes beyond machines and algorithms, and embraces humans and nature as part of a cyber-physical ecosystem of intelligence. This vision is based on the principle of active inference, which states that intelligent systems can learn from their observations and act on their environment to reduce uncertainty and achieve their goals. This leads to a formal account of collective intelligence that rests on shared narratives and goals.
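As a rough formal anchor (our sketch, using standard active-inference notation rather than anything stated in the episode): an agent with a generative model p(o, s) over observations o and hidden states s, and an approximate posterior q(s), minimizes the variational free energy

\[
F \;=\; \mathbb{E}_{q(s)}\big[\ln q(s) - \ln p(o, s)\big]
\;=\; D_{\mathrm{KL}}\big[\,q(s)\,\|\,p(s \mid o)\,\big] \;-\; \ln p(o).
\]

Perception updates q(s) to shrink the divergence term, while action changes o to make observations more probable under the model; both routes drive F down, which is the formal sense in which such systems act to “reduce uncertainty” and pursue their goals.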
To realize this vision, Friston suggests developing a shared hyper-spatial modelling language and transaction protocol, as well as novel methods for measuring and optimizing collective intelligence. This could harness the power of artificial intelligence for the common good, without compromising human dignity or autonomy. It also challenges us to rethink our relationship with technology, nature, and each other, and invites us to join a global community of sense-makers who are curious about the world and eager to improve it.
TOC:
00:00:00 Intro
00:07:10 Numerai (Sponsor segment)
00:09:48 Designing Ecosystems of Intelligence from First Principles (Friston et al)
00:18:30 Information / Infosphere and human agency
00:31:38 Intelligence
00:39:36 Reductionism
00:44:46 Universalism
00:54:23 Emergence
01:02:11 Markov blankets
01:22:33 Whole part relationships / structure learning
01:29:23 Enactivism
01:43:53 Knowledge and Language
01:50:56 ChatGPT
02:07:55 Ethics (is-ought)
02:35:06 Can people be evil?
02:39:05 Ethics in AI, subjectiveness
02:57:00 Final thoughts
Anti-AI / AI ethics clowns now pushing .gov for some criminalization, on cue.
A nonprofit AI research group wants the Federal Trade Commission to investigate OpenAI, Inc. and halt releases of GPT-4.
OpenAI “has released a product GPT-4 for the consumer market that is biased, deceptive, and a risk to privacy and public safety. The outputs cannot be proven or replicated. No independent assessment was undertaken prior to deployment,” said a complaint to the FTC submitted today by the Center for Artificial Intelligence and Digital Policy (CAIDP).
Calling for “independent oversight and evaluation of commercial AI products offered in the United States,” CAIDP asked the FTC to “open an investigation into OpenAI, enjoin further commercial releases of GPT-4, and ensure the establishment of necessary guardrails to protect consumers, businesses, and the commercial marketplace.”