In a new paper, DeepMind describes an AI algorithm that was able to discover a key update rule for deep reinforcement learning from scratch.
Note: This article was originally published on May 29, 2017, and updated on July 24, 2020. Overview: Neural networks are among the most popular machine learning algorithms; gradient descent forms the basis of how they are trained; and neural networks can be implemented in both R and Python using certain libraries and packages.
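To make the gradient descent point concrete, here is a minimal sketch that fits a single linear neuron with NumPy. The data, learning rate, and step count are arbitrary choices for illustration, not anything from the article.

```python
import numpy as np

# Toy data: y = 2x + 1 plus a little noise.
rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, size=100)
y = 2 * x + 1 + 0.1 * rng.normal(size=100)

w, b = 0.0, 0.0   # parameters of a single linear "neuron"
lr = 0.1          # learning rate (arbitrary choice)

for step in range(500):
    y_hat = w * x + b                       # forward pass
    grad_w = 2 * np.mean((y_hat - y) * x)   # d(MSE)/dw
    grad_b = 2 * np.mean(y_hat - y)         # d(MSE)/db
    w -= lr * grad_w                        # step against the gradient
    b -= lr * grad_b

print(f"w = {w:.2f}, b = {b:.2f}")  # should be close to 2 and 1
```

The same loop, applied layer by layer via backpropagation, is what trains a full neural network.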
Proteins are essential to the life of cells, carrying out complex tasks and catalyzing chemical reactions. Scientists and engineers have long sought to harness this power by designing artificial proteins that can perform new tasks, like treat disease, capture carbon, or harvest energy, but many of the processes designed to create such proteins are slow and complex, with a high failure rate.
In a breakthrough that could have implications across the healthcare, agriculture, and energy sectors, a team led by researchers in the Pritzker School of Molecular Engineering (PME) at the University of Chicago has developed an artificial intelligence-led process that uses big data to design new proteins.
By developing machine-learning models that can review protein information culled from genome databases, the researchers found relatively simple design rules for building artificial proteins. When the team constructed these artificial proteins in the lab, they found that the proteins performed chemistry so well that they rivaled proteins found in nature.
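The models in the paper are more sophisticated, but the underlying idea of extracting simple design rules from natural sequence data can be sketched. Below is a hypothetical Python example that learns per-position amino-acid frequencies from a toy multiple sequence alignment and samples a new artificial sequence from them; the alignment and all values are made up for illustration.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

# Toy "multiple sequence alignment": aligned natural sequences of one
# protein family. In the real work, such data comes from genome databases.
msa = [
    "MKTAYIA",
    "MKSAYIA",
    "MRTAYLA",
    "MKTGYIA",
]

L = len(msa[0])
# Per-position amino-acid frequency table: one very simple "design rule".
counts = np.ones((L, len(AMINO_ACIDS)))  # pseudocounts for smoothing
for seq in msa:
    for i, aa in enumerate(seq):
        counts[i, AMINO_ACIDS.index(aa)] += 1
freqs = counts / counts.sum(axis=1, keepdims=True)

# Sample a new artificial sequence position by position.
rng = np.random.default_rng(0)
designed = "".join(
    AMINO_ACIDS[rng.choice(len(AMINO_ACIDS), p=freqs[i])] for i in range(L)
)
print(designed)
```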
Yann LeCun, the chief AI scientist at Facebook, helped develop the deep learning algorithms that power many artificial intelligence systems today. In conversation with TED head Chris Anderson, LeCun discusses his current research into self-supervised machine learning, how he’s trying to build machines that learn with common sense (like humans), and his hopes for the next conceptual breakthrough in AI.
This talk was presented at an official TED conference, and was featured by our editors on the home page.
Researchers at DeepMind propose a new technique that automatically discovers a reinforcement learning algorithm from scratch.
Large-scale oceanic phenomena are complicated and often involve many natural processes. The tropical instability wave (TIW) is one such phenomenon.
Pacific TIW, a prominent, prevailing oceanic phenomenon in the eastern equatorial Pacific, features cusp-shaped waves propagating westward along both flanks of the tropical Pacific cold tongue.
Forecasting TIW has long depended on numerical models based on physical equations, or on statistical models. However, understanding such a complicated phenomenon requires accounting for many natural processes.
An AI algorithm is capable of automatically generating realistic-looking images from just fragments of pixels.
Why it matters: The achievement is the latest evidence that AI is increasingly able to learn from and copy the real world in ways that may eventually allow algorithms to create fictional images that are indistinguishable from reality.
What’s new: In a paper presented at this week’s International Conference on Machine Learning, researchers from OpenAI showed they could train the organization’s GPT-2 algorithm on images.
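The core idea is to flatten an image into a sequence of pixel values and train the model to predict the next pixel, exactly as a language model predicts the next word. The PyTorch sketch below illustrates that framing with a tiny stand-in transformer; it is not OpenAI's Image GPT code, and the model size, vocabulary, and random "images" are placeholder assumptions.

```python
import torch
import torch.nn as nn

# Treat each image as a flat sequence of discrete pixel values (0-255),
# the way a language model treats a sentence as a sequence of tokens.
VOCAB, SEQ_LEN, DIM = 256, 16 * 16, 64  # 16x16 grayscale toy images

class TinyPixelTransformer(nn.Module):
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, DIM)
        self.pos = nn.Parameter(torch.zeros(SEQ_LEN, DIM))
        layer = nn.TransformerEncoderLayer(DIM, nhead=4, batch_first=True)
        self.blocks = nn.TransformerEncoder(layer, num_layers=2)
        self.head = nn.Linear(DIM, VOCAB)

    def forward(self, x):                      # x: (batch, seq) pixel ids
        h = self.embed(x) + self.pos[: x.size(1)]
        # Causal mask: each pixel may attend only to earlier pixels.
        mask = nn.Transformer.generate_square_subsequent_mask(x.size(1))
        return self.head(self.blocks(h, mask=mask))

model = TinyPixelTransformer()
imgs = torch.randint(0, VOCAB, (8, SEQ_LEN))   # random stand-in "images"
logits = model(imgs[:, :-1])                   # predict pixel t+1 from pixels <= t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), imgs[:, 1:].reshape(-1)
)
loss.backward()
print(float(loss))
```

Trained at scale on real images, such a model can complete the bottom half of an image from the top half, which is how the researchers produced their samples.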
The latest AI algorithms are probing the evolution of galaxies, calculating quantum wave functions, discovering new chemical compounds and more. Is there anything that scientists do that can’t be automated?
The snake bites its tail: Google AI can independently discover AI methods, then optimize them. It evolves algorithms from scratch, using only basic mathematical operations, rediscovering fundamental ML techniques and showing the potential to discover novel algorithms.
AutoML-Zero: new research that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. arXiv: https://arxiv.org/abs/2003.
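To give a feel for that kind of search, here is a toy Python sketch that evolves short programs built from basic math operations to fit a target function. It only illustrates the evolutionary-search idea under invented assumptions (instruction set, memory layout, fitness task); it is not the AutoML-Zero code, which evolves full setup/predict/learn routines.

```python
import random

# Programs are short instruction lists acting on a small scalar memory;
# we evolve them to approximate a target function, here y = 2x + 1.
OPS = {
    "add": lambda a, b: a + b,
    "sub": lambda a, b: a - b,
    "mul": lambda a, b: a * b,
}
MEM_SIZE, PROG_LEN = 4, 5   # memory slots; instructions per program

def run(program, x):
    mem = [x, 1.0, 0.0, 0.0]            # mem[0] holds input, mem[1] a constant
    for op, a, b, out in program:
        mem[out] = OPS[op](mem[a], mem[b])
    return mem[0]                        # mem[0] is the output by convention

def random_instr():
    return (random.choice(list(OPS)), random.randrange(MEM_SIZE),
            random.randrange(MEM_SIZE), random.randrange(MEM_SIZE))

def fitness(program):                    # negative squared error on a few points
    xs = [-2.0, -1.0, 0.5, 1.0, 3.0]
    return -sum((run(program, x) - (2 * x + 1)) ** 2 for x in xs)

random.seed(0)
population = [[random_instr() for _ in range(PROG_LEN)] for _ in range(100)]
for _ in range(300):                     # regularized-evolution-style loop
    parent = max(random.sample(population, 10), key=fitness)  # tournament
    child = list(parent)
    child[random.randrange(PROG_LEN)] = random_instr()        # mutate one instr
    population.pop(0)                    # drop the oldest individual
    population.append(child)

best = max(population, key=fitness)
print(fitness(best), best)
```

The real system searches a far larger program space on actual learning tasks, but the mutate-evaluate-select loop is the same basic mechanism.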
The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.
Although the human brain has the computing power of a supercomputer, it needs only 20 watts, a millionth of the energy a supercomputer consumes.
One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons—but, to save energy, only as often as absolutely necessary.
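Spiking neural network models try to capture exactly this sparse, event-driven signalling. The Python sketch below implements a minimal leaky integrate-and-fire neuron that fires only when its membrane potential crosses a threshold; the constants are illustrative assumptions, not values from the article.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron: it accumulates input current,
# leaks charge over time, and emits a spike only when a threshold is crossed.
dt, tau, v_thresh, v_reset = 1.0, 20.0, 1.0, 0.0

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 0.12, size=200)   # noisy input over 200 time steps

v, spikes = 0.0, []
for t, i_in in enumerate(current):
    v += dt * (-v / tau + i_in)              # leak + integrate
    if v >= v_thresh:                        # fire only when necessary...
        spikes.append(t)
        v = v_reset                          # ...then reset the membrane
print(f"{len(spikes)} spikes in {len(current)} steps at t = {spikes}")
```

Most time steps produce no spike at all, which is the energy-saving property the brain exploits and that neuromorphic hardware aims to replicate.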