
Meta-Learning Machines in a Single Lifelong Trial

The most widely used machine learning algorithms were designed by humans and thus are hindered by our cognitive biases and limitations. Can we also construct meta-learning algorithms that can learn better learning algorithms so that our self-improving AIs have no limits other than those inherited from computability and physics? This question has been a main driver of my research since I wrote a thesis on it in 1987. In the past decade, it has become a driver of many other people’s research as well. Here I summarize our work starting in 1994 on meta-reinforcement learning with self-modifying policies in a single lifelong trial, and — since 2003 — mathematically optimal meta-learning through the self-referential Gödel Machine. This talk was previously presented at meta-learning workshops at ICML 2020 and NeurIPS 2021. Many additional publications on meta-learning can be found at https://people.idsia.ch/~juergen/metalearning.html.

Jürgen Schmidhuber.
Director, AI Initiative, KAUST
Scientific Director of the Swiss AI Lab IDSIA
Co-Founder & Chief Scientist, NNAISENSE
http://www.idsia.ch/~juergen/blog.html.

AI predicts 70% of earthquakes a week before they occur

The system only flagged eight false warnings and missed one earthquake.

Achieving high precision and accuracy in earthquake prediction remains a key scientific challenge, and artificial intelligence (AI) has been investigated as a technique to enhance our capabilities in this crucial area.

This is because AI can analyze large datasets of seismic activity and identify patterns or anomalies that human analysts might miss. Machine learning algorithms can thus help researchers understand earthquake patterns better.
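The article does not describe the model behind the reported system, but the general idea it points to (flagging unusual windows in seismic measurements relative to background activity) can be sketched with a standard anomaly detector. The feature set and data below are hypothetical, chosen only to illustrate the pattern.

```python
# Minimal sketch: flagging anomalous windows in seismic sensor data.
# Not the model from the reported study; features and data are hypothetical.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-window features: [peak amplitude, dominant frequency, RMS energy]
background = rng.normal(loc=[1.0, 5.0, 0.3], scale=[0.2, 1.0, 0.05], size=(1000, 3))
precursor = rng.normal(loc=[2.5, 2.0, 0.9], scale=[0.3, 0.5, 0.1], size=(5, 3))

detector = IsolationForest(contamination=0.01, random_state=0).fit(background)

# predict() returns -1 for windows the model considers anomalous relative to background
labels = detector.predict(np.vstack([background[:10], precursor]))
print(labels)
```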

Stanford introduces autonomous robot dogs with AI brains

There’s a new kind of robot dog in town and it gets its prowess from an artificial intelligence (AI) algorithm.

An AI algorithm for a brain

The new vision-based algorithm, according to the AI researchers at Stanford University and the Shanghai Qi Zhi Institute who led the work, enables the robodogs to scale tall objects, jump across gaps, crawl under low-hanging structures, and squeeze between cracks. In effect, the algorithm serves as the robodog's brain.

Researchers create a neural network for genomics that explains how it achieves accurate predictions

A team of New York University computer scientists has created a neural network that can explain how it reaches its predictions. The work reveals what accounts for the functionality of neural networks—the engines that drive artificial intelligence and machine learning—thereby illuminating a process that has largely been concealed from users.

The breakthrough centers on a specific usage of neural networks that has become popular in recent years—tackling challenging biological questions. Among these are examinations of the intricacies of RNA splicing—the focal point of the study—which plays a role in transferring information from DNA to functional RNA and protein products.

“Many neural networks are black boxes—these algorithms cannot explain how they work, raising concerns about their trustworthiness and stifling progress into understanding the underlying biological processes of genome encoding,” says Oded Regev, a computer science professor at NYU’s Courant Institute of Mathematical Sciences and the senior author of the paper, which was published in the Proceedings of the National Academy of Sciences.

New technique based on 18th-century mathematics shows simpler AI models don’t need deep learning

Researchers from the University of Jyväskylä were able to simplify the most popular technique of artificial intelligence, deep learning, using 18th-century mathematics. They also found that classical training algorithms that date back 50 years work better than the more recently popular techniques. Their simpler approach advances green IT and is easier to use and understand.

The recent success of artificial intelligence is significantly based on the use of one core technique: deep learning. Deep learning refers to techniques where networks with a large number of data processing layers are trained using massive datasets and a substantial amount of computational resources.

Deep learning enables computers to perform tasks such as analyzing and generating images and music, playing digitized games and, most recently in connection with ChatGPT and other generative AI techniques, acting as a conversational agent that provides high-quality summaries of existing knowledge.
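As a minimal sketch of what "a large number of data processing layers" looks like in code, here is a generic stack of layers in PyTorch. This is a plain illustration of a deep network, not the simplified approach developed by the Jyväskylä group.

```python
# Minimal sketch of a "deep" network: several stacked processing layers.
# Generic illustration only; does not reproduce the simplified method described above.
import torch
import torch.nn as nn

model = nn.Sequential(
    nn.Linear(784, 256), nn.ReLU(),   # layer 1
    nn.Linear(256, 128), nn.ReLU(),   # layer 2
    nn.Linear(128, 64), nn.ReLU(),    # layer 3
    nn.Linear(64, 10),                # output layer, e.g. 10 classes
)

x = torch.randn(32, 784)              # a batch of 32 flattened 28x28 inputs
logits = model(x)
print(logits.shape)                   # torch.Size([32, 10])
```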

Likewise debuts Pix, an AI chatbot for entertainment recommendations

Likewise, the company behind an app that can recommend your next TV binge, movie to watch, podcast to stream or book to read, is out today with its own entertainment-focused AI companion, Pix. Built using a combination of Likewise’s own customer data and technology from partner OpenAI, Pix can make entertainment recommendations and answer other questions via text message or email, within the Pix mobile app or website, or even through Pix’s TV app using a voice remote.

Founded in 2017 by former Microsoft communications chief Larry Cohen with financial backing from Bill Gates, the recommendations startup aims to offer an easy way for people to discover new TV shows, movies, books, podcasts and more, as well as to follow other users and share lists of their favorites. Although recommendations today are often baked into the streaming services and apps we use to play our entertainment content, Likewise maintains a registered user base of more than 6 million and over 2 million monthly active users.

To build Pix, the company leveraged around 600 million consumer data points and machine learning algorithms, along with the natural language processing technology of OpenAI’s GPT-3.5 and GPT-4. The chatbot learns the preferences of the individual user and then provides personalized recommendations, much as the core Likewise app does. In addition, the bot reaches out to users when new content matching their interests becomes available.
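Likewise has not published Pix's implementation, but the general pattern the article describes (folding a user's known preferences into a prompt for an OpenAI model) can be sketched roughly as follows. The preference data and prompt below are hypothetical, and this is only one plausible way to wire such a system together.

```python
# Rough sketch of the general pattern: feed a user's stated preferences into an
# LLM prompt to get personalized recommendations. Not Likewise's actual pipeline;
# the preference data below is hypothetical.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

user_preferences = {
    "liked_shows": ["Severance", "Dark"],
    "liked_books": ["Project Hail Mary"],
    "disliked_genres": ["reality TV"],
}

prompt = (
    "Recommend three TV shows for a user with these preferences, "
    f"with one sentence of reasoning each: {user_preferences}"
)

response = client.chat.completions.create(
    model="gpt-4",
    messages=[{"role": "user", "content": prompt}],
)
print(response.choices[0].message.content)
```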

Start of the Fully Fault Tolerant Age of Quantum Computers

Without full fault tolerance, quantum computers will never practically get past 100 qubits, but full fault tolerance will eventually open up the possibility of billions of qubits and beyond. In a Wright Brothers Kitty Hawk moment for quantum computing, a fully fault-tolerant algorithm has now been executed on real qubits. Only three qubits were involved, but this had never before been done on real hardware.

This marks the start of the fully fault-tolerant age of quantum computers. For quantum computers to deliver genuinely disruptive computing power, full fault tolerance on real qubits was a necessary first step.
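The article does not specify which algorithm was run on the three qubits. As a hedged illustration of the basic idea behind quantum error correction (encoding one logical bit redundantly across several physical qubits so that a single fault can be caught and undone), here is a textbook three-qubit bit-flip code in Qiskit; it is not the fault-tolerant experiment described above.

```python
# Textbook three-qubit bit-flip code: encode one logical qubit across three
# physical qubits, inject a single X error, then detect and correct it.
# Illustration of the redundancy idea only; not the experiment reported above.
from qiskit import QuantumCircuit

qc = QuantumCircuit(3, 3)

# Encode: copy the logical value of qubit 0 onto qubits 1 and 2 (here |0> -> |000>)
qc.cx(0, 1)
qc.cx(0, 2)

# Simulate a single bit-flip error on qubit 1
qc.x(1)

# Decode and correct: the two CNOTs write the parity of qubit 0 with each partner
# onto qubits 1 and 2; the Toffoli flips qubit 0 back only if it disagrees with both.
qc.cx(0, 1)
qc.cx(0, 2)
qc.ccx(1, 2, 0)

qc.measure(range(3), range(3))
print(qc.draw())
```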
