
Proteins are essential to the life of cells, carrying out complex tasks and catalyzing chemical reactions. Scientists and engineers have long sought to harness this power by designing artificial proteins that can perform new tasks, such as treating disease, capturing carbon, or harvesting energy, but many of the processes designed to create such proteins are slow and complex, with a high failure rate.

In a breakthrough that could have implications across the healthcare, agriculture, and energy sectors, a team led by researchers in the Pritzker School of Molecular Engineering (PME) at the University of Chicago has developed an AI-led process that uses big data to design new proteins.

By developing machine-learning models that can review protein information culled from genome databases, the researchers found relatively simple design rules for building artificial proteins. When the team constructed these artificial proteins in the lab, they found that they performed chemistries so well that they rivaled those found in nature.
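The core idea of learning design rules from genome databases can be illustrated with a deliberately simple statistical model. The sketch below is a toy stand-in, not the team's actual method: it estimates per-position amino-acid frequencies from a small aligned set of sequences and samples new "designed" sequences from those statistics. The four-sequence alignment is invented for illustration; real models are fit to thousands of natural sequences and also capture couplings between positions.

```python
import numpy as np

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"

def position_frequencies(alignment):
    """Estimate per-position amino-acid frequencies from equal-length,
    aligned protein sequences (with a +1 pseudocount so no amino acid
    gets zero probability)."""
    length = len(alignment[0])
    counts = np.ones((length, len(AMINO_ACIDS)))
    for seq in alignment:
        for pos, aa in enumerate(seq):
            counts[pos, AMINO_ACIDS.index(aa)] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def sample_sequence(freqs, rng):
    """Draw a new sequence position by position from the learned
    frequencies -- the simplest possible 'design rule'."""
    idx = [rng.choice(len(AMINO_ACIDS), p=row) for row in freqs]
    return "".join(AMINO_ACIDS[i] for i in idx)

# Tiny invented alignment of four-residue sequences.
alignment = ["ACDK", "ACEK", "GCDK", "ACDR"]
freqs = position_frequencies(alignment)
new_seq = sample_sequence(freqs, np.random.default_rng(0))
print(new_seq)
```

Sampled sequences preserve the position-wise statistics of the natural family; the striking result in the research is that even relatively simple statistical rules of this flavor can yield proteins that fold and function.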

Yann LeCun, the chief AI scientist at Facebook, helped develop the deep learning algorithms that power many artificial intelligence systems today. In conversation with Head of TED Chris Anderson, LeCun discusses his current research into self-supervised machine learning, how he’s trying to build machines that learn with common sense (like humans), and his hopes for the next conceptual breakthrough in AI.

This talk was presented at an official TED conference, and was featured by our editors on the home page.

Large-scale oceanic phenomena are complicated and often involve many natural processes. The tropical instability wave (TIW) is one such phenomenon.

The Pacific TIW, a prominent and prevailing oceanic phenomenon in the eastern equatorial Pacific Ocean, features cusp-shaped waves propagating westward along both flanks of the tropical Pacific cold tongue.

The forecast of TIW has long depended on physical equation-based numerical models or statistical models. However, understanding such a complicated phenomenon requires accounting for many interacting natural processes.

An AI algorithm is capable of automatically generating realistic-looking images from just fragments of pixels.

Why it matters: The achievement is the latest evidence that AI is increasingly able to learn from and copy the real world in ways that may eventually allow algorithms to create fictional images that are indistinguishable from reality.

What’s new: In a paper presented at this week’s International Conference on Machine Learning, researchers from OpenAI showed they could train the organization’s GPT-2 algorithm on images.
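The essential trick in that work is to treat an image as a one-dimensional sequence of pixel tokens and model it autoregressively, exactly as a language model treats words. The sketch below illustrates that framing with a toy count-based bigram model in place of the transformer: `flatten_to_tokens`, the 4-value pixel vocabulary, and the 2×2 images are all invented simplifications (the real system uses a transformer over a much larger quantized color palette).

```python
import numpy as np

def flatten_to_tokens(image):
    """Treat each quantized pixel as a token in raster order --
    the same sequence framing used when training a language model
    on images."""
    return image.flatten()

def fit_bigram(sequences, vocab=4):
    """Toy stand-in for the transformer: estimate P(next pixel | current
    pixel) from counts, with Laplace smoothing."""
    counts = np.ones((vocab, vocab))
    for seq in sequences:
        for a, b in zip(seq[:-1], seq[1:]):
            counts[a, b] += 1
    return counts / counts.sum(axis=1, keepdims=True)

def generate(model, start, length, rng):
    """Autoregressive sampling: each new pixel is drawn conditioned
    on the one before it."""
    seq = [start]
    for _ in range(length - 1):
        seq.append(rng.choice(len(model), p=model[seq[-1]]))
    return np.array(seq)

# Two tiny invented "images" with a 4-value pixel vocabulary.
images = [np.array([[0, 1], [2, 3]]), np.array([[0, 1], [1, 2]])]
seqs = [flatten_to_tokens(im) for im in images]
model = fit_bigram(seqs)
sample = generate(model, start=0, length=4, rng=np.random.default_rng(0))
print(sample.reshape(2, 2))
```

Given the first pixel (the "bits of pixels" prompt), the model completes the rest of the image one token at a time; swapping the bigram table for a transformer is what makes the completions realistic at scale.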

The snake bites its tail

Google AI can independently discover AI methods.

Then optimizes them

It evolves algorithms from scratch—using only basic mathematical operations—rediscovering fundamental ML techniques & showing the potential to discover novel algorithms.

AutoML-Zero: new research that can rediscover fundamental ML techniques by searching a space of different ways of combining basic mathematical operations. Arxiv: https://arxiv.org/abs/2003.
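The search idea can be sketched in a few lines. The toy below is a drastically simplified, assumed version of the approach, not the paper's actual system: a candidate "algorithm" is a short list of register-machine instructions built from add/subtract/multiply, and a mutate-and-keep-the-best loop evolves it to fit a target function. (AutoML-Zero itself evolves full setup/predict/learn routines with regularized evolution; the op set, register layout, and hill-climbing loop here are illustrative choices.)

```python
import random

# Basic math ops -- the only building blocks the search may use.
OPS = [lambda a, b: a + b, lambda a, b: a - b, lambda a, b: a * b]

def run(program, x):
    """Execute a program on input x. Registers: r0 = input,
    r1 = constant 1.0, r2 = scratch/output."""
    regs = [x, 1.0, 0.0]
    for op, i, j in program:
        regs[2] = OPS[op](regs[i], regs[j])
    return regs[2]

def random_instr(rng):
    # (op index, source register i, source register j); dest is always r2.
    return (rng.randrange(3), rng.randrange(3), rng.randrange(3))

def loss(program, data):
    return sum((run(program, x) - y) ** 2 for x, y in data)

def evolve(data, steps=2000, seed=0):
    """Mutate one instruction at a time, keeping any non-worse child."""
    rng = random.Random(seed)
    best = [random_instr(rng) for _ in range(3)]
    best_loss = loss(best, data)
    for _ in range(steps):
        child = list(best)
        child[rng.randrange(len(child))] = random_instr(rng)
        child_loss = loss(child, data)
        if child_loss <= best_loss:
            best, best_loss = child, child_loss
    return best, best_loss

# Invented target: rediscover the linear function y = 2x + 1.
data = [(x, 2 * x + 1) for x in [-2, -1, 0, 1, 2]]
program, final_loss = evolve(data)
print(program, final_loss)
```

Even this crude loop often stumbles on the right composition of primitives (e.g. r2 = r0 + r0, then r2 = r2 + r1); scaling the same principle to learning algorithms is what lets AutoML-Zero rediscover techniques like gradient descent.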

The high energy consumption of artificial neural networks’ learning activities is one of the biggest hurdles for the broad use of Artificial Intelligence (AI), especially in mobile applications. One approach to solving this problem can be gleaned from knowledge about the human brain.

Although the human brain has the computing power of a supercomputer, it needs only 20 watts, which is only a millionth of the energy consumption of a supercomputer.

One of the reasons for this is the efficient transfer of information between neurons in the brain. Neurons send short electrical impulses (spikes) to other neurons—but, to save energy, only as often as absolutely necessary.
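This spike-only-when-necessary behavior is commonly modeled with a leaky integrate-and-fire neuron, sketched below under assumed toy parameters (the threshold, leak factor, and input trace are invented for illustration): the membrane potential accumulates input and decays each step, and the neuron emits a spike only when the potential crosses threshold, staying silent otherwise.

```python
def lif_neuron(input_current, threshold=1.0, leak=0.9):
    """Toy leaky integrate-and-fire neuron.

    The membrane potential v integrates input and decays by `leak`
    each step; a spike (1) is emitted only when v crosses `threshold`,
    after which v resets. Between spikes the output is 0 -- this
    sparseness is what makes spiking communication energy-efficient.
    """
    v = 0.0
    spikes = []
    for current in input_current:
        v = leak * v + current
        if v >= threshold:
            spikes.append(1)
            v = 0.0  # reset after spiking
        else:
            spikes.append(0)
    return spikes

# Weak input integrates without spiking; stronger input triggers a spike.
trace = [0.3, 0.3, 0.3, 0.0, 0.8, 0.8]
print(lif_neuron(trace))  # -> [0, 0, 0, 0, 1, 0]
```

Note that the neuron stays silent for most of the trace and fires exactly once, when accumulated input finally crosses threshold; hardware that only consumes energy on those rare spikes is the efficiency lesson AI researchers are trying to borrow.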