The explosion of AI onto the tech scene is exciting and unsettling in equal measure.
Stuart Russell, a noted computer scientist at the University of California, Berkeley, has warned of dire consequences if artificial intelligence (AI) development remains unchecked. Russell is among the prominent researchers who co-signed an open letter calling for a six-month moratorium on training AI systems more powerful than GPT-4.
It is hard to believe that ChatGPT has been with us for only a few months. AI was once discussed mainly among a small circle of computer researchers; the conversational chatbot has made it a fixture of mainstream media as well.
On Wednesday, Meta announced an AI model called the Segment Anything Model (SAM) that can identify individual objects in images and videos, even those not encountered during training, reports Reuters.
According to a blog post from Meta, SAM is an image segmentation model that can respond to text prompts or user clicks to isolate specific objects within an image. Image segmentation is a process in computer vision that involves dividing an image into multiple segments or regions, each representing a specific object or area of interest.
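For a concrete sense of how promptable segmentation works, here is a minimal sketch using the segment-anything Python package Meta published alongside the model. The image path, checkpoint file, and click coordinates below are placeholders; note also that the public code release exposes point and box prompts, while text prompting is described in the paper rather than in the released predictor API.

```python
# pip install opencv-python git+https://github.com/facebookresearch/segment-anything.git
import numpy as np
import cv2
from segment_anything import sam_model_registry, SamPredictor

# Load a pretrained SAM checkpoint (downloaded separately from Meta's repo).
sam = sam_model_registry["vit_b"](checkpoint="sam_vit_b_01ec64.pth")
predictor = SamPredictor(sam)

# SAM expects an RGB image; OpenCV loads BGR, so convert.
image = cv2.cvtColor(cv2.imread("photo.jpg"), cv2.COLOR_BGR2RGB)
predictor.set_image(image)  # computes the image embedding once

# A single "click" prompt: one foreground point at pixel (x=500, y=375).
masks, scores, _ = predictor.predict(
    point_coords=np.array([[500, 375]]),
    point_labels=np.array([1]),   # 1 = foreground, 0 = background
    multimask_output=True,        # return several candidate masks
)
print(masks.shape, scores)  # boolean masks of shape (3, H, W) plus confidence scores
```

Because the image embedding is computed once in set_image, each subsequent click resolves almost instantly, which is what makes the interactive "segment anything you point at" behavior practical.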
Silicon Valley AI company Cerebras has released seven open-source GPT models to provide an alternative to the tightly controlled, proprietary systems available today.
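Since the weights are open, anyone can download and run them. Here is a minimal sketch using the Hugging Face transformers library, assuming the checkpoints published under the cerebras organization; the smallest variant, Cerebras-GPT-111M, is used so it runs on modest hardware, and the prompt text is arbitrary:

```python
# pip install transformers torch
from transformers import AutoTokenizer, AutoModelForCausalLM

model_name = "cerebras/Cerebras-GPT-111M"  # seven sizes, from 111M up to 13B parameters
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Encode a prompt and sample a short continuation.
inputs = tokenizer("Generative AI is", return_tensors="pt")
outputs = model.generate(
    **inputs,
    max_new_tokens=40,
    do_sample=True,    # sample for variety rather than greedy decoding
    temperature=0.7,
)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```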
Italy has already temporarily banned ChatGPT over privacy concerns, and other European countries may follow.
Italy isn’t the only country reckoning with the rapid pace of AI progression and its implications for society. Other governments are coming up with their own rules for AI, which, whether or not they mention generative AI, will undoubtedly touch on it. Generative AI refers to a set of AI technologies that generate new content based on prompts from users. It is more advanced than previous iterations of AI, thanks in no small part to new large language models, which are trained on vast quantities of data.
There have long been calls for AI to face regulation. But the technology has progressed so quickly that governments are finding it difficult to keep up. Computers can now create realistic art, write entire essays, or even generate lines of code in a matter of seconds.
“We have got to be very careful that we don’t create a world where humans are somehow subservient to a greater machine future,” Sophie Hackford, a futurist and global technology innovation advisor for American farming equipment maker John Deere, told CNBC’s “Squawk Box Europe” Monday.
Quick pulse check on Google’s Bard: according to a report from CNBC, the whole project is a mess, and no one seems to know what to do with the tech.
To recap, Bard is Google’s search-integrated, AI-powered chatbot, billed just a few weeks ago as a competitor to Microsoft’s OpenAI-powered Bing search.
But Google seriously fumbled the feature’s launch: the bot’s first advertisement inadvertently showcased its inability to find and present accurate information to users. Google’s stock nosedived as a result, wiping roughly $100 billion off the company’s market value in a single day.
Why do AI ethics conferences fail? They fail because they lack a metatheory: an account of how ethical disagreements can emerge from phenomenologically different worlds, how those worlds are revealed to us, and how shifts between them have shaped Western civilization over the past several thousand years, from the Greeks and Romans through the Renaissance and the Enlightenment.
So perhaps we’ve given up on the ethical hand-wringing a bit too early. More precisely, there is a third, nonzero-sum approach, one that combines ethics with reciprocal accountability, and it actually does explain this. But first, consider the flaw in simple reciprocal accountability. Yes, right now we can use ChatGPT to catch ChatGPT-assisted cheating, and we can provide many other balancing feedbacks. But as noted above with reference to the colonization of Indigenous nations, once the technological and developmental gap grows large enough, dynamics that once operated largely under our control and in our favor can quickly invert, and former allies become the new masters.
Forrest Landry capably identified that problem during a recent conversation with Jim Rutt. The implication one might draw is that, like it or not, axiology has a role to play here (more precisely, a phenomenologically informed understanding of axiology). Zak Stein identifies some of this in his article “Technology is Not Values Neutral”. Finally, Iain McGilchrist brings these two topics, power and value, together in his metatheory of attention, which rests on that same notion of reciprocal accountability (there called opponent processing). And there is historical precedent here too, with biological analogues: opponent processing is instantiated in the neurology of the brain, and it goes back at least as far as Nematostella vectensis, a sea anemone whose lineage is some 700 million years old. The opponent processing of two very different ways of attending to the world has thus worked for a very long time, setting two very different phenomenological worlds (and their associated ethical frameworks) against each other as counterweights.
Disclaimer: None of it is real. It’s just a movie, made mostly with AI, which wrote the script, created the concept art, generated all the voices, and participated in some creative decisions. The AI-generated voices used in this film do not reflect the opinions or thoughts of their original owners. This short film was created as a demonstration of the potential of AI in filmmaking.
Artificial intelligence is here to stay. How it is applied and, perhaps more importantly, how it is regulated are now the crucial questions to ask. Walter Isaacson speaks with former Google CEO Eric Schmidt about AI’s impact on life, politics, and warfare, as well as what can be done to keep it under control.
Originally aired on March 23, 2023.
IN THE NEAR FUTURE, we should anticipate certain technological developments that will forever change our world. For instance, today’s text-based ChatGPT will evolve into personal “conversational AI” assistants installed in smart glasses and contact lenses, gradually phasing out smartphones. Advances in fields such as AI, AR/VR, bionics, and cybernetics will eventually lead to “generative AI”-powered immersive neurotechnology that lets you create virtual environments and holographic messages directly from your thoughts, with your imagination serving as the “prompt engineer.” What will happen when everyone constantly broadcasts their mind?
Can the pursuit of experience lead to true enlightenment? Are we edging towards Experiential Nirvana on a civilizational level despite certain turbulent events?