AI voices include subtleties of speech

The video surprises viewers when it’s revealed that, while the woman on screen is a real person, the main character speaking is an AI. It aims to demonstrate how entertainment studios can leverage AI to create highly convincing romantic encounters. This marks a significant milestone for Sonantic as its technology is now able to recreate subtle emotions and non-speech sounds, while also opening up new creative possibilities for studios.

The voice models, which already express a range of human emotions from happiness to sadness, can now convey subtleties such as flirty, coy, and teasing, amongst other new “Style” options. They also have the ability to capture non-speech sounds – such as breaths, scoffs, and laughs. This combination of advances in speech synthesis makes Sonantic’s platform more comprehensive than ever before, helping entertainment studios create life-like performances in record time.
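
As a purely hypothetical sketch of what style-conditioned synthesis could look like in practice, the Python snippet below models a performance as speech segments tagged with a style or a non-speech sound. The class names, style labels, and markup format are invented for illustration and do not reflect Sonantic's actual API or file formats.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Segment:
    text: str                         # words to speak; empty for a non-speech sound
    style: str = "neutral"            # e.g. "flirty", "coy", "teasing" (illustrative labels)
    non_speech: Optional[str] = None  # e.g. "breath", "scoff", "laugh"

@dataclass
class Performance:
    voice: str
    segments: List[Segment] = field(default_factory=list)

    def to_markup(self) -> str:
        """Render the performance as simple tagged markup a TTS engine could consume."""
        parts = []
        for seg in self.segments:
            if seg.non_speech:
                parts.append(f"<sound name='{seg.non_speech}'/>")
            else:
                parts.append(f"<style name='{seg.style}'>{seg.text}</style>")
        return " ".join(parts)

performance = Performance(
    voice="demo_voice",
    segments=[
        Segment(text="So, you want to know my secret?", style="teasing"),
        Segment(text="", non_speech="laugh"),
        Segment(text="Maybe I'll tell you.", style="coy"),
    ],
)
print(performance.to_markup())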

“Human beings are incredibly complex by nature and our voices play a critical role in helping us connect with the world around us,” said Zeena Qureshi, CEO. “Sonantic is committed to capturing the nuances of the human voice, and we’re incredibly proud of these technological breakthroughs that we have brought to life through ‘What’s Her Secret?’. From flirting and giggling, to breathing and pausing, this is the most realistic romantic demo we’ve created to date, helping us inch closer to our vision of being the CGI of audio.”

When It Comes to AI, Can We Ditch the Datasets?

Summary: A machine-learning model trained on synthetic data for image classification can rival one trained on traditional datasets.

Source: MIT

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model’s performance.
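
As a rough, self-contained illustration of the idea (a toy sketch, not MIT's method), the snippet below trains the same simple classifier once on plentiful, clean "synthetic" images and once on a small, noisy "real" set, then evaluates both on a held-out "real" test set. All data here is generated in-process; the noise levels and image sizes are arbitrary assumptions.

import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

# Toy data: "real" images are noisy bright-vs-dark 8x8 patches;
# "synthetic" images are clean, procedurally generated patches of the same classes.
rng = np.random.default_rng(0)

def make_images(n, noise):
    labels = rng.integers(0, 2, size=n)              # class 0 = dark, class 1 = bright
    base = np.where(labels[:, None] == 1, 0.8, 0.2)  # mean pixel intensity per class
    images = base + noise * rng.standard_normal((n, 64))  # flattened 8x8 "images"
    return images, labels

X_synth, y_synth = make_images(2000, noise=0.05)   # cheap, abundant synthetic data
X_real,  y_real  = make_images(500,  noise=0.30)   # scarce, noisy "real" data
X_test,  y_test  = make_images(1000, noise=0.30)   # held-out "real" test set

for name, (X, y) in {"synthetic": (X_synth, y_synth), "real": (X_real, y_real)}.items():
    clf = LogisticRegression(max_iter=1000).fit(X, y)
    acc = accuracy_score(y_test, clf.predict(X_test))
    print(f"trained on {name} data: test accuracy = {acc:.3f}")

In this toy setup the synthetic-trained model can match or beat the one trained on scarce real data simply because it sees far more examples, which mirrors the claim that synthetic data can rival traditional datasets.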

The Path Towards Human Level AI

Human Level AI may be here sooner rather than later. As neural networks far surpass the computing power of the human brain, the prospect of a truly general AI becomes reachable. The economic value will be profound, as AI will add trillions of dollars to the economy.


Neuro-symbolic AI brings us closer to machines with common sense

H/T Ben Dickson.

Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades.

Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems.

Among the many gaps in AI, Tenenbaum is focused on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and approximating functions and more toward the idea of all the things the human mind does when you’re modeling the world, explaining and understanding the things you’re seeing, imagining things that you can’t see but could happen, and making them into goals that you can achieve by planning actions and solving problems?”
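
As a toy illustration of the general neuro-symbolic pattern Tenenbaum is describing (and not his or IBM's actual system), the sketch below pairs a stand-in "neural" detector, which outputs object labels with confidences, with a symbolic rule layer that reasons over those labels to reach conclusions the detector never outputs directly. Every name and rule here is invented for the example.

# Toy neuro-symbolic pattern: a (stand-in) neural perception stage produces
# symbols with confidences; a symbolic stage applies explicit rules to them.

def neural_detector(image_id):
    """Stand-in for a neural network: returns detected objects with confidences."""
    fake_detections = {
        "scene_1": [("cup", 0.92), ("table", 0.88), ("steam", 0.75)],
        "scene_2": [("cup", 0.90), ("table", 0.85)],
    }
    return fake_detections[image_id]

# Symbolic knowledge: simple if-then rules over detected symbols.
RULES = [
    ({"cup", "steam"}, "the cup probably contains a hot drink"),
    ({"cup", "table"}, "the cup is resting on the table"),
]

def symbolic_reasoner(detections, threshold=0.7):
    """Apply every rule whose premises are all confidently detected."""
    symbols = {label for label, conf in detections if conf >= threshold}
    return [conclusion for premises, conclusion in RULES if premises <= symbols]

for scene in ("scene_1", "scene_2"):
    print(scene, "->", symbolic_reasoner(neural_detector(scene)))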


Entanglement unlocks scaling for quantum machine learning

The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
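
To make that statement concrete, here is a textbook-style form of the classical bound (an illustrative restatement under standard assumptions of a uniform test distribution and 0-1 loss, not necessarily the exact expression used in the Los Alamos paper). For a target function $f: X \to Y$ learned from $t$ distinct training inputs, the risk of the returned hypothesis $h_t$, averaged uniformly over all possible target functions, satisfies

\[
\mathbb{E}_f\big[R(h_t)\big] \;\ge\; \left(1 - \frac{1}{|Y|}\right)\left(1 - \frac{t}{|X|}\right),
\]

so every algorithm faces the same floor on average, and that floor drops only as the training set covers more of the input space. The quantum result described above loosens exactly this dependence on $t$: roughly speaking, entangling the training states with a reference system can stand in for additional training pairs, which is why the exponential data overhead disappears.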

The promise of AI with Demis Hassabis — DeepMind: The Podcast (Season 2, Episode 9)

Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.

For questions or feedback on the series, message us on Twitter @DeepMind or email [email protected].

Interviewee: DeepMind co-founder and CEO, Demis Hassabis.

Credits.
Presenter: Hannah Fry.
Series Producer: Dan Hardoon.
Production support: Jill Achineku.
Sound design: Emma Barnaby.
Music composition: Eleni Shaw.
Sound Engineer: Nigel Appleton.
Editor: David Prest.
Commissioned by DeepMind.

Thank you to everyone who made this season possible!
