
The upward spiral of artificial intelligence looks set to produce machines which are cleverer and more powerful than any humans. What happens when machines can themselves create super-intelligent machines? ‘The Singularity’ is the name science fiction writers gave to this situation. Philosopher David Chalmers discusses the philosophical implications of this very real possibility with Nigel Warburton in this episode of the Philosophy Bites podcast.

A new technique from Microsoft and OpenAI, µ-Parametrization (µP), boosts performance and makes huge AI models much easier to train: hyperparameters tuned on a small model transfer directly to a far larger one, cutting the cost of scaling up. This could soon enable models at trillions of parameters that outperform today's best systems, and perhaps even humans. The future of machine learning sure looks interesting.
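The core trick behind µP, often called µTransfer, can be made concrete with a minimal sketch: tune hyperparameters once on a narrow proxy model, then carry them to a wide model by rescaling per layer. The function name and the exact rules below (Adam-style learning rate shrinking like 1/width for matrix-shaped layers) are illustrative assumptions for this sketch; the authoritative scaling tables are in Microsoft's µP papers and library.

```python
# Illustrative sketch of µP-style hyperparameter transfer (µTransfer).
# The scaling rules here are an assumption modeled on the µP recipe for
# Adam; consult the official µP library for the exact prescription.

def mup_adam_lrs(base_lr, base_width, width):
    """Scale per-layer Adam learning rates from a tuned narrow model
    to a wider target model without re-tuning."""
    m = width / base_width   # width multiplier
    return {
        "input": base_lr,        # vector-like params: keep the base LR
        "hidden": base_lr / m,   # matrix-like params: LR shrinks as 1/width
        "output": base_lr / m,   # readout matrix: same 1/width scaling
    }

# Tune once at width 256, then reuse the result at width 8192:
lrs = mup_adam_lrs(base_lr=1e-3, base_width=256, width=8192)
print(lrs)  # hidden/output learning rates are 32x smaller than the base
```

The point of the design is that the expensive hyperparameter search happens only on the cheap small model; the wide model inherits near-optimal settings for free.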

TIMESTAMPS:
00:00 The AI Bottleneck
00:38 SKILLSHARE: MAKE YOUR OWN AI
02:28 How AI Was Made Much Faster
04:37 A New Revolutionary Artificial Intelligence
07:18 Hardcoding Machine Intelligence
09:17 Last Words


The video surprises viewers when it’s revealed that, while the woman on screen is a real person, the main character speaking is an AI. It aims to demonstrate how entertainment studios can leverage AI to create highly convincing romantic encounters. This marks a significant milestone for Sonantic as its technology is now able to recreate subtle emotions and non-speech sounds, while also opening up new creative possibilities for studios.

The voice models, which already express a range of human emotions from happiness to sadness, can now convey subtleties such as flirty, coy, and teasing, amongst other new “Style” options. They also have the ability to capture non-speech sounds – such as breaths, scoffs, and laughs. This combination of advances in speech synthesis makes Sonantic’s platform more comprehensive than ever before, helping entertainment studios create life-like performances in record time.

“Human beings are incredibly complex by nature and our voices play a critical role in helping us connect with the world around us,” said Zeena Qureshi, CEO. “Sonantic is committed to capturing the nuances of the human voice, and we’re incredibly proud of these technological breakthroughs that we have brought to life through ‘What’s Her Secret?’. From flirting and giggling, to breathing and pausing, this is the most realistic romantic demo we’ve created to date, helping us inch closer to our vision of being the CGI of audio.”

Summary: A machine-learning model trained on synthetic data for image classification can rival one trained on a traditional dataset.

Source: MIT

Huge amounts of data are needed to train machine-learning models to perform image classification tasks, such as identifying damage in satellite photos following a natural disaster. However, these data are not always easy to come by. Datasets may cost millions of dollars to generate, if usable data exist in the first place, and even the best datasets often contain biases that negatively impact a model’s performance.
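As a toy illustration of the synthetic-data idea (not MIT's actual pipeline), the sketch below procedurally generates labeled "images" and trains a simple classifier on them. The shapes, sizes, and hyperparameters are all invented for the example; the point is that labels come free with generation, sidestepping the cost of collecting and annotating real data.

```python
import numpy as np

rng = np.random.default_rng(0)

def synth_image(kind, size=16):
    """Procedurally generate a noisy toy 'image': a filled square (kind 0)
    or a cross (kind 1). A stand-in for real synthetic-data generators."""
    img = rng.normal(0, 0.1, (size, size))
    if kind == 0:
        img[4:12, 4:12] += 1.0   # square
    else:
        img[7:9, :] += 1.0       # horizontal bar of the cross
        img[:, 7:9] += 1.0       # vertical bar of the cross
    return img.ravel()

# Build a labeled synthetic training set -- no manual annotation needed.
X = np.stack([synth_image(k % 2) for k in range(400)])
y = np.array([k % 2 for k in range(400)])

# Train a logistic-regression classifier by plain gradient descent.
w = np.zeros(X.shape[1])
for _ in range(200):
    p = 1 / (1 + np.exp(-np.clip(X @ w, -30, 30)))
    w -= 0.1 * X.T @ (p - y) / len(y)

# Evaluate on freshly generated, held-out synthetic samples.
X_test = np.stack([synth_image(k % 2) for k in range(100)])
y_test = np.array([k % 2 for k in range(100)])
acc = ((X_test @ w > 0) == y_test).mean()
print(acc)
```

Because the generator is cheap, the held-out evaluation set costs nothing extra either; in practice the open question the MIT work addresses is whether accuracy transfers from synthetic to real test images.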

Human-level AI may be here sooner rather than later. As neural networks come to rival, and eventually surpass, the computing power of the human brain, the prospect of a truly general AI becomes reachable. The economic value will be profound, as AI could add trillions of dollars to the economy.


H/T Ben Dickson.

Artificial intelligence research has made great achievements in solving specific applications, but we’re still far from the kind of general-purpose AI systems that scientists have been dreaming of for decades.

Among the solutions being explored to overcome the barriers of AI is the idea of neuro-symbolic systems that bring together the best of different branches of computer science. In a talk at the IBM Neuro-Symbolic AI Workshop, Joshua Tenenbaum, professor of computational cognitive science at the Massachusetts Institute of Technology, explained how neuro-symbolic systems can help to address some of the key problems of current AI systems.

Among the many gaps in AI, Tenenbaum is focused on one in particular: “How do we go beyond the idea of intelligence as recognizing patterns in data and approximating functions and more toward the idea of all the things the human mind does when you’re modeling the world, explaining and understanding the things you’re seeing, imagining things that you can’t see but could happen, and making them into goals that you can achieve by planning actions and solving problems?”


The field of machine learning on quantum computers got a boost from new research removing a potential roadblock to the practical implementation of quantum neural networks. While theorists had previously believed an exponentially large training set would be required to train a quantum neural network, the quantum No-Free-Lunch theorem developed by Los Alamos National Laboratory shows that quantum entanglement eliminates this exponential overhead.

“Our work proves that both big data and big entanglement are valuable in quantum machine learning. Even better, entanglement leads to scalability, which solves the roadblock of exponentially increasing the size of the data in order to learn it,” said Andrew Sornborger, a computer scientist at Los Alamos and a coauthor of the paper published Feb. 18 in Physical Review Letters. “The theorem gives us hope that quantum neural networks are on track towards the goal of quantum speed-up, where eventually they will outperform their counterparts on classical computers.”

The classical No-Free-Lunch theorem states that any machine-learning algorithm is as good as, but no better than, any other when their performance is averaged over all possible functions connecting the data to their labels. A direct consequence of this theorem that showcases the power of data in classical machine learning is that the more data one has, the better the average performance. Thus, data is the currency in machine learning that ultimately limits performance.
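The classical averaging argument can be checked directly on a tiny domain: fix any learner, average its test accuracy over every possible labeling function, and the result is exactly chance. The "majority vote" learner below is an arbitrary stand-in invented for this sketch; by the theorem, any other learner yields the same average.

```python
import itertools

def learner(train_pairs):
    """An arbitrary learner: predict the most common training label
    everywhere. The No-Free-Lunch average is the same for any choice."""
    labels = [y for _, y in train_pairs]
    guess = int(sum(labels) >= len(labels) / 2)
    return lambda x: guess

domain = [0, 1, 2, 3]
train_x, test_x = domain[:2], domain[2:]

# Average test accuracy over ALL 2^4 possible labelings of the domain.
accs = []
for labels in itertools.product([0, 1], repeat=len(domain)):
    f = dict(zip(domain, labels))                    # the "true" function
    h = learner([(x, f[x]) for x in train_x])        # fit on training half
    accs.append(sum(h(x) == f[x] for x in test_x) / len(test_x))

avg = sum(accs) / len(accs)
print(avg)  # exactly 0.5: chance-level, as the theorem predicts
```

The averaging washes out any structure: for each fixed training labeling, the unseen test labels range uniformly over all possibilities, so no prediction rule can beat a coin flip on average. More data shrinks the unseen portion of the domain, which is why data is the limiting currency.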