The National Aeronautics and Space Administration (NASA) has captured rare footage of a black hole eating up a star and creating a gas cloud as large as the solar system.
They say that actors ought to fully immerse themselves in their roles. Uta Hagen, the acclaimed Tony Award-winning actress and legendary acting teacher, said this: “It’s not about losing yourself in the role, it’s about finding yourself in the role.”
In today’s column, I’m going to take you on a journey of looking at how the latest in Artificial Intelligence (AI) can be used for role-playing. This is not merely play-acting. Instead, people are opting to use a type of AI known as generative AI, including the headline-sparking AI app ChatGPT, as a means of seeking self-growth via role-playing.
You might be wondering why I didn’t showcase a more alarming example of generative AI role-playing. I could do so, and you can readily find such examples online. For example, there are fantasy-style role-playing games that have the AI portray a magical character with amazing capabilities, all rendered with written fluency on par with a human player. The AI in its role might, for example, try to expunge the human player within the role-playing scenario, or might berate the human during the game.
My aim here was to illuminate the notion that role-playing doesn’t necessarily have to be the kind that clobbers someone over the head and announces itself to the world at large. There are subtle versions of role-playing that generative AI can undertake. Overall, whether the generative AI is full-on role-playing or performing in a restricted mode, the question still stands as to what kind of mental health impacts this functionality might portend. There are the good, the bad, and the ugly associated with generative AI and role-playing games.
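To make the mechanics concrete, here is a minimal sketch of how such an AI role-playing session is commonly set up programmatically. It assumes the OpenAI Python client with an API key configured in the environment; the persona text and model name are illustrative choices, not anything prescribed by the column, which discusses the consumer ChatGPT app rather than the API.

```python
# Minimal sketch: casting a generative AI model in a role for a self-growth
# exercise. Assumes the OpenAI Python client is installed and OPENAI_API_KEY
# is set; the persona wording and model name are illustrative assumptions.
from openai import OpenAI

client = OpenAI()  # reads the API key from the environment

# The "system" message sets the role the model is asked to play.
persona = (
    "You are playing the role of a supportive career mentor. "
    "Stay in character, ask reflective questions, and avoid giving "
    "medical or clinical advice."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # any chat-capable model would do
    messages=[
        {"role": "system", "content": persona},
        {
            "role": "user",
            "content": "I froze during my presentation today. "
                       "Can we role-play how I might handle it next time?",
        },
    ],
)

print(response.choices[0].message.content)
```

The essential idea is simply that the system instruction casts the model in a role, after which the dialogue proceeds in character; that is all the subtler forms of role-playing described above require.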
Every year, the Stanford Institute for Human-Centered Artificial Intelligence (HAI) puts out its AI Index, a massive compendium of data and graphs that tries to sum up the current state of artificial intelligence. The 2022 AI Index, which came out this week, is as impressive as ever, with 190 pages covering R&D, technical performance, ethics, policy, education, and the economy. I’ve done you a favor by reading every page of the report and plucking out 12 charts that capture the state of play.
It’s worth noting that many of the trends I reported from last year’s 2021 index still hold. For example, we are still living in a golden AI summer with ever-increasing publications, the AI job market is still global, and there’s still a disconcerting gap between corporate recognition of AI risks and attempts to mitigate said risks. Rather than repeat those points here, I refer you to last year’s coverage.
The Memo: https://lifearchitect.ai/memo/
Read the paper: https://arxiv.org/abs/2212.08073
GitHub repo: https://github.com/anthropics/ConstitutionalHarmlessnessPaper/tree/main/samples
Chapters:
0:00 Opening.
3:59 Demonstration.
11:26 Explanation.
Dr Alan D. Thompson is a world expert in artificial intelligence (AI), specialising in the augmentation of human intelligence, and advancing the evolution of ‘integrated AI’. Alan’s applied AI research and visualisations are featured across major international media, including citations in the University of Oxford’s debate on AI Ethics in December 2021.
The biggest obstacle is that each robotics lab has its own idea of what a conscious robot looks like. There are also moral implications of building robots that have consciousness. Will they have rights, like in Bicentennial Man?
Considerations about conscious robots have been the domain of science fiction for decades. Isaac Asimov wrote several books, including I, Robot, that examined the implications from the perspectives of law, society, and family, raising many moral questions. Experts in technology ethics have considered and expanded upon these questions as scientists like those in the Columbia University lab work toward building more intelligent machines.
Science fiction has also brought us killer machines, as in The Terminator, and conscious robots sound like a plausible route to creating them. Humans can learn bad ideas and act on them, and there is no reason to believe that conscious robots would not fall into the same trap. Some of science’s greatest minds have warned against getting carried away with artificial intelligence.
CRISPR-Cas9 is a revolutionary gene-editing tool that has widespread implications for research, medical treatments, the environment, and ethics. In this pla…
The Memo: https://lifearchitect.ai/memo/
Demo site: https://muse-model.github.io/
Read the paper: https://arxiv.org/abs/2301.
Humans are at the center of most discussions about both the environment and technology. One goal of sustainability is to ensure that future generations of humans have opportunities to thrive on planet Earth. Debates about the ethics of technology often focus on how to protect human rights and promote human autonomy.
At the same time, some conversations about the environment and technology are now taking humans out of the equation. As Adam Kirsch points out in a new book, “The Revolt Against Humanity: Imagining a Future Without Us,” people in two very different schools of thought are coming to a similar conclusion: that the world might not have people much longer and might be better off as a result.
Kirsch takes readers on a guided tour of the discussions in these two camps. “Antihumanists” hold that we have sown the seeds of our own demise, bringing environmental apocalypse upon ourselves, and perhaps even deserve to go extinct. “Transhumanists” are obsessed with maintaining control and envision a future in which we use technology to become something greater than Homo sapiens and even cheat death itself.
Support us! https://www.patreon.com/mlst.
Irina Rish is a world-renowned professor of computer science and operations research at the Université de Montréal and a core member of the prestigious Mila organisation. She is a Canada CIFAR AI Chair and the Canada Excellence Research Chair in Autonomous AI. Irina holds an MSc and PhD in AI from the University of California, Irvine as well as an MSc in Applied Mathematics from the Moscow Gubkin Institute. Her research focuses on machine learning, neural data analysis, and neuroscience-inspired AI. In particular, she is exploring continual lifelong learning, optimization algorithms for deep neural networks, sparse modelling and probabilistic inference, dialog generation, biologically plausible reinforcement learning, and dynamical systems approaches to brain imaging analysis. Prof. Rish holds 64 patents, has published over 80 research papers, several book chapters, three edited books, and a monograph on Sparse Modelling. She has served as a Senior Area Chair for NeurIPS and ICML. Her research aims to take us closer to the holy grail of Artificial General Intelligence, and she continues to push the boundaries of machine learning and neuroscience-inspired AI.
In a conversation about artificial intelligence (AI), Irina and Tim discussed the idea of transhumanism and the potential for AI to improve human flourishing. Irina suggested that instead of looking at AI as something to be controlled and regulated, people should view it as a tool to augment human capabilities. She argued that attempting to create an AI that is smarter than humans is not the best approach, and that a hybrid of human and AI intelligence is much more beneficial. As an example, she mentioned how technology can be used as an extension of the human mind, to track mental states and improve self-understanding. Ultimately, Irina concluded that transhumanism is about having a symbiotic relationship with technology, which can have a positive effect on both parties.
Tim then discussed the contrasting types of human and machine intelligence and how combining them could lead to something interesting emerging. He brought up the Trolley Problem and how difficult moral quandaries could be programmed into an AI. Irina then referenced The Garden of Forking Paths, Borges’s short story that explores how different paths in life can be taken and how decisions from the past can affect the present.
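To see why “programming in” a moral quandary like the Trolley Problem is harder than it sounds, consider a deliberately toy sketch. It is purely illustrative and not drawn from the conversation; the function name, the numbers, and the naive outcome-counting rule are all assumptions made for the example.

```python
# Toy illustration only: a naive utilitarian rule for a trolley-style dilemma.
# Real moral reasoning is not reducible to this kind of comparison.
def choose_track(deaths_if_no_action: int, deaths_if_switch: int) -> str:
    """Return which option a purely outcome-counting agent would pick."""
    if deaths_if_switch < deaths_if_no_action:
        return "switch"        # fewer expected deaths, so the rule says act
    return "do not switch"     # otherwise the rule says remain passive

# The classic setup: five people on the main track, one on the side track.
print(choose_track(deaths_if_no_action=5, deaths_if_switch=1))  # -> "switch"
```

Even this trivial rule bakes in contested premises, for instance that lives are interchangeable counters and that acting and refraining are morally equivalent, which is exactly the kind of difficulty the Trolley Problem is meant to expose.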
Mental health has become a widely discussed topic nowadays.
In the past, discussions concerning mental health were often hushed up or altogether swept under the rug. A gradual cultural change has made it acceptable to consider mental health issues openly and has eased qualms about doing so in publicly acknowledged ways.
You might give some of the credit for this change in overarching societal attitudes to the advent of easily accessed smartphone apps that aid your personal mindfulness and presumably spur you toward mental well-being. There are apps for mindfulness, for meditation, for diagnosing your mental health status, for mental health screening, and so on.
People on social media are touting the use of generative AI such as ChatGPT as handy for interactively providing mental health advice. This is a worrisome trend. Those versed in AI Ethics and AI Law are alarmed and caution that this is not a sound idea.