DeepMind has released a new paper that shows impressive advances in reinforcement learning. How far does it bring us toward general AI?

Artificial intelligence uncovers the building blocks of life and paves the way for a new era in science
DeepMind, a company bought by Google, has predicted with unprecedented precision the 3D structure of nearly all the proteins made by the human body. The idea is to offer the predicted structures of practically every protein with a known amino-acid sequence free of charge. “We believe that this is the most important contribution to date that artificial intelligence has made to scientific knowledge,” said DeepMind CEO Demis Hassabis following the publication of the research in the journal Nature.
The Pentagon Is Experimenting With Using Artificial Intelligence To “See Days In Advance”
The Pentagon aims to use cutting-edge cloud networks and artificial intelligence systems to anticipate adversaries’ moves before they make them.
Google’s own mobile chip is called Tensor
Rick Osterloh casually dropped his laptop onto the couch and leaned back, satisfied. It’s not a mic, but the effect is about the same. Google’s chief of hardware had just shown me a demo of the company’s latest feature: computational processing for video that will debut on the Pixel 6 and Pixel 6 Pro. The feature was only possible with Google’s own mobile processor, which it’s announcing today.
He’s understandably proud and excited to share the news. The chip is called Tensor, and it’s the first system-on-chip (SoC) designed by Google. The company has “been at this about five years,” he said, though CEO Sundar Pichai wrote in a statement that Tensor “has been four years in the making and builds off of two decades of Google’s computing experience.”
That software expertise is something Google has come to be known for. It led the way in computational photography with its Night Sight mode for low light shots, and weirded out the world with how successfully its conversational AI Duplex was able to mimic human speech — right down to the “ums and ahs.” Tensor both leverages Google’s machine learning prowess and enables the company to bring AI experiences to smartphones that it couldn’t before.



Pentagon believes its precognitive AI can predict events ‘days in advance’
The US military’s AI experiments are growing particularly ambitious. The Drive reports that US Northern Command recently completed a string of tests for Global Information Dominance Experiments (GIDE), a combination of AI, cloud computing and sensors that could give the Pentagon the ability to predict events “days in advance,” according to the command’s leader, General Glen VanHerck. It’s not as mystical as it sounds, but it could lead to a major change in military and government operations.
The machine learning-based system observes changes in raw, real-time data that hint at possible trouble. If satellite imagery shows signs that a rival nation’s submarine is preparing to leave port, for instance, the AI could flag that mobilization knowing the vessel will likely leave soon. Military analysts can take hours or even days to comb through this information — GIDE technology could send an alert within “seconds,” VanHerck said.
The most recent dry run, GIDE 3, was the most expansive yet. It saw all 11 US combatant commands and the broader Defense Department use a mix of military and civilian sensors to address scenarios where “contested logistics” (such as communications in the Panama Canal) might pose a problem. The technology involved wasn’t strictly new, the general said, but the military “stitched everything together.”
DeepMind’s Vibrant New Virtual World Trains Flexible AI With Endless Play
The paper’s authors said they’ve created an endlessly challenging virtual playground for AI. The world, called XLand, is a vibrant video game managed by an AI overlord and populated by algorithms that must learn the skills to navigate it.
The game-managing AI keeps an eye on what the game-playing algorithms are learning and automatically generates new worlds, games, and tasks to continuously confront them with new experiences.
The team said some veteran algorithms faced 3.4 million unique tasks across roughly 700,000 games in 4,000 XLand worlds. Most notably, the agents developed a general skillset tied to no single game, but useful in all of them.

Sergey Young: breaking the barrier of maximum lifespan
The news we like: “Five to ten years from now, we’ll have a new, special kind of drug: longevity drugs. And unlike today’s medications, which are always focused on one disease, these drugs will give us an opportunity to influence aging as a whole, in a very holistic way, working on healthspan, not only on lifespan… It’s very likely that these new drugs will be developed with the help of artificial intelligence, which will compress drug development cycles to a half or a third of what they are today.”
Ahead of the launch of his new book Growing Young, Sergey Young joins us for a video interview to discuss longevity horizons, personal health strategies and disruptive tech – and how we are moving towards radically extending our lifespan and healthspan.
Sergey Young, the longevity investor and founder of the Longevity Vision Fund, is on a mission to extend the healthy lifespans of at least one billion people. His new book, Growing Young, is released on 24th August and is already rising up the Amazon charts.
“It’s been an amazing three-year journey,” Young told Longevity.Technology. “I spent hours and days in different labs, in the best clinics in the world and the best academic institutions. I even talked to Peter Jackson! I’m very excited to share this with everyone, so every reader can start their longevity journey today.”

The Future of Deep Learning Is Photonic
Over the years, deep learning has required an ever-growing number of these multiply-and-accumulate operations. Consider LeNet, a pioneering deep neural network designed for image classification. In 1998 it was shown to outperform other machine-learning techniques at recognizing handwritten letters and numerals. But by 2012, AlexNet, a neural network that crunched through about 1,600 times as many multiply-and-accumulate operations as LeNet, was able to recognize thousands of different types of objects in images.
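A multiply-and-accumulate (MAC) operation is the basic primitive that dominates neural-network compute: each output of a dense layer is a running sum of input-times-weight products. A minimal sketch in Python (toy values chosen for illustration, not any real network's weights):

```python
def mac_layer(inputs, weights):
    """Compute one dense-layer output as a chain of
    multiply-and-accumulate (MAC) operations."""
    acc = 0.0
    for x, w in zip(inputs, weights):
        acc += x * w  # one MAC: a multiply, then an accumulate
    return acc

# A layer mapping N inputs to M outputs costs N * M MACs;
# deep networks stack many such layers, so MAC counts grow fast.
out = mac_layer([1.0, 2.0, 3.0], [0.5, -1.0, 2.0])
print(out)  # 1*0.5 + 2*(-1.0) + 3*2.0 = 4.5
```

Counting MACs this way is what makes the LeNet-to-AlexNet comparison meaningful: the networks differ in architecture, but both reduce to enormous numbers of this one operation.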
Advancing from LeNet’s initial success to AlexNet required almost 11 doublings of computing performance. During the 14 years that transition took, Moore’s law provided much of that increase. The challenge has been to keep this trend going now that Moore’s law is running out of steam. The usual solution is simply to throw more computing resources—along with time, money, and energy—at the problem.
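The “almost 11 doublings” figure follows directly from the roughly 1,600-fold growth in MAC counts, since log₂(1600) ≈ 10.6. A quick check:

```python
import math

ratio = 1600  # AlexNet's MAC count relative to LeNet's (approximate)
doublings = math.log2(ratio)
print(round(doublings, 1))  # 10.6 -- "almost 11 doublings"

# Spread over the 14 years from 1998 to 2012, that is one doubling
# roughly every 1.3 years.
print(round(14 / doublings, 1))
```

That pace, a doubling roughly every 16 months, is close to what Moore's law historically delivered, which is why its slowdown is the bottleneck the article describes.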
As a result, training today’s large neural networks often has a significant environmental footprint. One 2019 study found, for example, that training a certain deep neural network for natural-language processing produced five times the CO2 emissions typically associated with driving an automobile over its lifetime.