Machine learning reveals recipe for building artificial proteins

Proteins are essential to the life of cells, carrying out complex tasks and catalyzing chemical reactions. Scientists and engineers have long sought to harness this power by designing artificial proteins that can perform new tasks, such as treating disease, capturing carbon, or harvesting energy. But many of the processes designed to create such proteins are slow and complex, with a high failure rate.

In a breakthrough that could have implications across the healthcare, agriculture, and energy sectors, a team led by researchers in the Pritzker School of Molecular Engineering (PME) at the University of Chicago has developed an AI-led process that uses big data to design new proteins.

By developing machine-learning models that can review protein information culled from genome databases, the researchers found relatively simple design rules for building artificial proteins. When the team constructed these artificial proteins in the lab, they found that they performed chemistry so well that they rivaled proteins found in nature.
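The article doesn't detail the team's models, but the general idea, learning position-specific sequence statistics from aligned natural proteins and scoring candidate designs against them, can be sketched in a few lines of Python. Everything below (the toy alignment, the pseudocount smoothing, the scoring function) is an illustrative assumption, not the researchers' actual method:

```python
from collections import Counter
import math

# Toy multiple sequence alignment (MSA); real models are trained on
# thousands of sequences culled from genome databases.
msa = [
    "MKVLA",
    "MKILA",
    "MRVLA",
    "MKVLG",
]

def position_profiles(msa, pseudocount=1.0):
    """Per-position amino-acid probabilities, smoothed with a pseudocount."""
    alphabet = sorted(set("".join(msa)))
    profiles = []
    for i in range(len(msa[0])):
        counts = Counter(seq[i] for seq in msa)
        total = len(msa) + pseudocount * len(alphabet)
        profiles.append({aa: (counts.get(aa, 0) + pseudocount) / total
                         for aa in alphabet})
    return profiles

def log_likelihood(seq, profiles):
    """Score a candidate design against the learned sequence statistics."""
    return sum(math.log(profiles[i].get(aa, 1e-9)) for i, aa in enumerate(seq))

profiles = position_profiles(msa)
print(log_likelihood("MKVLA", profiles))  # natural-like design: high score
print(log_likelihood("WWWWW", profiles))  # implausible design: low score
```

Richer models of this kind also capture correlations between pairs of positions rather than treating each position independently.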

Deep learning, neural networks and the future of AI

Yann LeCun, the chief AI scientist at Facebook, helped develop the deep learning algorithms that power many artificial intelligence systems today. In conversation with head of TED Chris Anderson, LeCun discusses his current research into self-supervised machine learning, how he’s trying to build machines that learn with common sense (like humans) and his hopes for the next conceptual breakthrough in AI.


AI model to forecast complicated large-scale tropical instability waves in Pacific Ocean

Large-scale oceanic phenomena are complicated and often involve many natural processes. The tropical instability wave (TIW) is one such phenomenon.

The Pacific TIW, a prominent, prevailing oceanic phenomenon in the eastern equatorial Pacific Ocean, features cusp-shaped waves propagating westward along both flanks of the tropical Pacific cold tongue.

Forecasting TIWs has long depended on numerical models based on physical equations, or on statistical models. However, many natural processes must be taken into account to capture such a complicated phenomenon.
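As a hedged illustration of what a data-driven forecaster can look like (not the published model), the sketch below maps a short history of sea-surface-temperature (SST) fields to a future field with a small convolutional network; the shapes, architecture, and random training data are all placeholders:

```python
import torch
import torch.nn as nn

class SSTForecaster(nn.Module):
    """Map the last `history` SST snapshots to one future SST field."""
    def __init__(self, history=5):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(history, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 1, kernel_size=3, padding=1),  # forecast field
        )

    def forward(self, sst_history):       # (batch, history, lat, lon)
        return self.net(sst_history)      # (batch, 1, lat, lon)

model = SSTForecaster()
sst = torch.randn(8, 5, 64, 128)          # synthetic SST snapshots
pred = model(sst)
loss = nn.functional.mse_loss(pred, torch.randn(8, 1, 64, 128))
loss.backward()                           # one toy training step
```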

Researchers develop AI algorithm that can generate images

An AI algorithm is capable of automatically generating realistic-looking images from just a small patch of pixels.

Why it matters: The achievement is the latest evidence that AI is increasingly able to learn from and copy the real world in ways that may eventually allow algorithms to create fictional images that are indistinguishable from reality.

What’s new: In a paper presented at this week’s International Conference on Machine Learning, researchers from OpenAI showed they could train the organization’s GPT-2 algorithm on images.

AutoML-Zero: Evolving Code that Learns

The snake bites its tail

Google AI can independently discover AI methods, then optimize them.

It evolves algorithms from scratch, using only basic mathematical operations, rediscovering fundamental ML techniques and showing the potential to discover novel algorithms.

AutoML-Zero: new research that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. arXiv: https://arxiv.org/abs/2003.


Machine learning (ML) has seen tremendous successes recently, made possible by ML algorithms like deep neural networks that were discovered through years of expert research. The difficulty of this research fueled AutoML, a field that aims to automate the design of ML algorithms. So far, AutoML has focused on constructing solutions by combining sophisticated hand-designed components. A typical example, and the topic of much research, is neural architecture search, a subfield in which neural networks are built automatically out of complex layers (e.g., convolutions, batch-norm, and dropout).
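AutoML-Zero drops those hand-designed components and searches over programs assembled from basic operations. The toy sketch below captures the flavor, a tiny instruction set, a fitness measure, and a mutate-and-select loop loosely inspired by regularized evolution; every detail is a simplification for illustration, not the paper's actual search space or method:

```python
import random

OPS = ["add", "sub", "mul"]

def run(program, x):
    """Execute a program over registers r0..r2, with the input in r0."""
    r = [x, 1.0, 0.0]
    for op, a, b, dst in program:
        if op == "add": r[dst] = r[a] + r[b]
        elif op == "sub": r[dst] = r[a] - r[b]
        elif op == "mul": r[dst] = r[a] * r[b]
    return r[2]  # r2 holds the output

def random_instr():
    return (random.choice(OPS), random.randrange(3),
            random.randrange(3), random.randrange(3))

def fitness(program, target=lambda x: 2 * x + 1):
    """Negative squared error against a simple target function."""
    xs = [0.0, 1.0, 2.0, 3.0]
    return -sum((run(program, x) - target(x)) ** 2 for x in xs)

population = [[random_instr() for _ in range(4)] for _ in range(50)]
for _ in range(200):
    parent = max(random.sample(population, 5), key=fitness)  # tournament
    child = list(parent)
    child[random.randrange(len(child))] = random_instr()     # mutate
    population.append(child)
    population.pop(0)                     # age out the oldest individual

best = max(population, key=fitness)
print(fitness(best), best)
```

The real system evolves full setup/predict/learn routines over vector and matrix registers, which is how it manages to rediscover techniques like gradient-based learning.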

New learning algorithm should significantly expand the possible applications of AI

The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.

Although the human brain has the computing power of a supercomputer, it needs only 20 watts, roughly a millionth of the energy a supercomputer consumes.

One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons, but, to save energy, only as often as absolutely necessary.
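A standard abstraction of this behavior is the leaky integrate-and-fire neuron, sketched below with illustrative parameters: it integrates input current, emits a spike only when its membrane potential crosses a threshold, and then resets.

```python
import numpy as np

def lif_neuron(current, dt=1.0, tau=20.0, threshold=1.0):
    """Leaky integrate-and-fire: return the time steps at which spikes occur."""
    v, spikes = 0.0, []
    for t, i_in in enumerate(current):
        v += dt / tau * (-v + i_in)   # leaky integration of input current
        if v >= threshold:            # fire only when necessary
            spikes.append(t)
            v = 0.0                   # reset after the spike
    return spikes

rng = np.random.default_rng(0)
current = rng.uniform(0.0, 2.0, size=200)   # noisy input drive
print(lif_neuron(current))                  # sparse spike times
```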

OpenAI’s fiction-spewing AI is learning to generate images

In February of last year, the San Francisco–based research lab OpenAI announced that its AI system could now write convincing passages of English. Feed the beginning of a sentence or paragraph into GPT-2, as the system was called, and it could continue the thought at essay length with almost human-like coherence.

Now, the lab is exploring what would happen if the same algorithm were instead fed part of an image. The results, which were given an honorable mention for best paper at this week’s International Conference on Machine Learning, open up a new avenue for image generation, ripe with opportunity and consequences.
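The underlying trick is to treat an image as a sequence: flatten quantized pixels into one long string, train a model to predict each pixel from the ones before it, and then complete an image from a prefix such as its top half. In the hedged sketch below, a tiny LSTM stands in for the much larger transformer OpenAI actually used, and the vocabulary size, shapes, and data are toy placeholders:

```python
import torch
import torch.nn as nn

VOCAB = 16                                   # coarsely quantized pixel values

class PixelLM(nn.Module):
    """Autoregressive model over a flattened sequence of pixel ids."""
    def __init__(self):
        super().__init__()
        self.embed = nn.Embedding(VOCAB, 32)
        self.rnn = nn.LSTM(32, 64, batch_first=True)
        self.head = nn.Linear(64, VOCAB)

    def forward(self, seq):                  # (batch, length) of pixel ids
        h, _ = self.rnn(self.embed(seq))
        return self.head(h)                  # logits for the next pixel

model = PixelLM()
img = torch.randint(0, VOCAB, (1, 8 * 8))    # toy 8x8 "image", flattened
logits = model(img[:, :-1])                  # predict pixel t+1 from 1..t
loss = nn.functional.cross_entropy(
    logits.reshape(-1, VOCAB), img[:, 1:].reshape(-1))
loss.backward()                              # one toy training step

# Completion: feed the top half, then sample the bottom pixel by pixel.
seq = img[:, : 8 * 8 // 2]
with torch.no_grad():
    for _ in range(8 * 8 // 2):
        nxt = model(seq)[:, -1].softmax(-1).multinomial(1)
        seq = torch.cat([seq, nxt], dim=1)
```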
