Silicon Valley AI company Cerebras released seven open source GPT models to provide an alternative to the tightly controlled and proprietary systems available today.
It seems some countries in Europe might ban ChatGPT over privacy concerns.
Italy isn’t the only country reckoning with the rapid pace of AI progression and its implications for society. Other governments are coming up with their own rules for AI, which, whether or not they mention generative AI, will undoubtedly touch on it. Generative AI refers to a set of AI technologies that generate new content based on prompts from users. It is more advanced than previous iterations of AI, thanks in no small part to new large language models, which are trained on vast quantities of data.
There have long been calls for AI to face regulation. But the pace at which the technology has progressed is such that it is proving difficult for governments to keep up. Computers can now create realistic art, write entire essays, or even generate lines of code in a matter of seconds.
“We have got to be very careful that we don’t create a world where humans are somehow subservient to a greater machine future,” Sophie Hackford, a futurist and global technology innovation advisor for American farming equipment maker John Deere, told CNBC’s “Squawk Box Europe” Monday.
Quick pulse check on Google’s Bard: according to a report from CNBC, the whole project is a mess, and no one seems to know what to actually do with the tech.
To recap, Bard is Google’s search-integrated, AI-powered chatbot, which was billed as a competitor to Microsoft’s OpenAI-powered Bing search just a few weeks ago.
But Google seriously fumbled the feature’s launch: the bot’s first advertisement accidentally showcased its inability to find and present accurate information to users. Google’s stock nosedived as a result, erasing roughly $100 billion of the company’s market value in a single day.
Why do AI ethics conferences fail? They fail because they lack a metatheory to explain how ethical disagreements can emerge from phenomenologically different worlds, how those worlds are revealed to us, and how shifts between them have shaped the development of Western civilization for the last several thousand years, from the Greeks and Romans through the Renaissance and Enlightenment.
So perhaps we’ve given up on the ethics hand-wringing a bit too early. Or more precisely, a third nonzero-sum approach that combines ethics and reciprocal accountability is available that actually does explain this. But first, let’s consider the flaw in simple reciprocal accountability. Yes, right now we can use ChatGPT to catch ChatGPT cheats, and provide many other balancing feedbacks. But as has been noted above with reference to the colonization of Indigenous nations, once the technological/developmental gap is sufficiently large, those dynamics which operate largely under our control and in our favor can quickly change, and the former allies become the new masters.
Forrest Landry capably identified that problem during a recent conversation with Jim Rutt. The implication one might draw is that, though we may not like it, there is in fact a role to play by axiology (or more precisely, a phenomenologically informed understanding of axiology). Zak Stein identifies some of that in his article “Technology is Not Values Neutral”. Lastly, Iain McGilchrist brings both of these topics, that of power and value, together using his metatheory of attention, which uses that same notion of reciprocal accountability (only here it is called opponent processing). And yes, there is historical precedent here too; we can point to biological analogues. This is all instantiated in the neurology of the brain, and it goes back at least as far as Nematostella vectensis, a sea anemone whose lineage stretches back some 700 million years! So the opponent processing of two very different ways of attending to the world has worked for a very long time, by opposing two very different phenomenological worlds (and their associated ethical frameworks) to counterbalance each other.
Disclaimer: None of it is real. It’s just a movie, made mostly with AI, which took care of writing the script, creating the concept art, generating all the voices, and participating in some creative decisions. The AI-generated voices used in this film do not reflect the opinions and thoughts of their original owners. This short film was created as a demonstration to showcase the potential of AI in filmmaking.
#AI #Filmmaking #Aliens #Movies #ScienceFiction #SciFi #Films
Artificial Intelligence is here to stay. How it is being applied—and, perhaps more importantly, regulated—are now the crucial questions to ask. Walter Isaacson speaks with former Google CEO Eric Schmidt about A.I.’s impact on life, politics, and warfare, as well as what can be done to keep it under control.
Originally aired on March 23, 2023.
Major support for Amanpour and Company is provided by the Anderson Family Charitable Fund, Sue and Edgar Wachenheim, III, Candace King Weir, Jim Attwood and Leslie Williams, Mark J. Blechner, Bernard and Denise Schwartz, Koo and Patricia Yuen, the Leila and Mickey Straus Family Charitable Trust, Barbara Hope Zuckerberg, Jeffrey Katz and Beth Rogers, the Filomen M. D’Agostino Foundation and Mutual of America.
For more from Amanpour and Company, including full episodes, click here: https://to.pbs.org/2NBFpjf.
IN THE NEAR FUTURE, we should anticipate certain technological developments that will forever change our world. For instance, today’s text-based ChatGPT will evolve to give rise to personal “conversational AI” assistants installed in smart glasses and contact lenses that will gradually phase out smartphones. Technological advances in fields such as AI, AR/VR, bionics, and cybernetics, will eventually lead to “generative AI”-powered immersive neurotechnology that enables you to create virtual environments and holographic messages directly from your thoughts, with your imagination serving as the “prompt engineer.” What will happen when everyone constantly broadcasts their mind?
#SelfTranscendence #metaverse #ConversationalAI #GenerativeAI #ChatGPT #SimulationSingularity #SyntellectEmergence #GlobalMind #MindUploading #CyberneticImmortality #SimulatedMultiverse #TeleologicalEvolution #ExperientialRealism #ConsciousMind
Can the pursuit of experience lead to true enlightenment? Are we edging towards Experiential Nirvana on a civilizational level despite certain turbulent events?
There is a new catchphrase that some are using when talking about today’s generative AI. I am loath to repeat the phrase, but the discomfort of doing so is worth the chance of curtailing its usage going forward.
Are you ready?
Some have been saying that generative AI such as ChatGPT is so-called alien intelligence. Hogwash. This kind of phrasing has to be stopped. Here are the reasons to do so.
Has AI advanced too far and too fast? Does it represent an out-of-control threat to humanity? Some credible observers believe AI may have reached a tipping point, and that if research on the technology continues unchecked, AI could spin out of control and become dangerous.
This article explores how Google responded to ChatGPT by using foundation models and generative AI to create innovative products and improve its existing offerings. It also examines Google’s use of Safe AI when creating new products.
“Moreover, in all of these tasks, GPT-4’s performance is strikingly close to human-level performance, and often vastly surpasses prior models such as ChatGPT. Given the breadth and depth of GPT-4’s capabilities, we believe that it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system.”
We are indeed living in “interesting” times.
Paul Smith-Goodson is the Vice President and Principal Analyst covering AI and quantum for Moor Insights & Strategy. He is currently working on several research projects, one of which is a unique method of using machine learning for highly accurate prediction of real-time and future global propagation of HF radio signals.
“Give it a shot! We think this tool will transform your linguistic-visual process both in terms of creative power and discovery.”
San Francisco-based independent artificial intelligence research lab Midjourney unveiled, in a tweet, its new “/describe” feature, which transforms images into words. The company, popular for its AI-fueled ability to create images based on a series of prompts, also launched more features, including “repeat” and “permutations,” for its pro subscribers.
Paul DelSignore, creative technologist and artificial intelligence aficionado, took to Medium to break down how these features could benefit users. He envisions a future with better search engine indexing and search functionality as a result of “/describe”.