
People seem to be continually surprised by the new capabilities of large machine learning models, such as PaLM, DALL-E, Chinchilla, SayCan, Socratic Models, Flamingo, and Gato (all released in the last two months!). Luckily, there is a famous paper on how AI progress is governed by scaling laws, under which models predictably get better as they get larger. Could we forecast AI progress ahead of time by measuring how performance on each task improves with model size, fitting a curve, and extrapolating to calculate which size of model is needed to reach human performance?
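To make the proposal concrete, here is a minimal sketch of that forecasting recipe, assuming task performance follows a simple power law in parameter count; the model sizes, error rates, and human-performance threshold below are invented placeholders, not real benchmark data:

```python
import numpy as np

# Hypothetical (parameter count, task error) pairs for one model family.
params = np.array([1e8, 1e9, 1e10, 1e11])
error = np.array([0.52, 0.38, 0.27, 0.19])

# Fit log(error) = log(a) - b*log(N), i.e. error = a * N^(-b).
# Fitting in log space keeps the problem linear and numerically stable.
slope, intercept = np.polyfit(np.log(params), np.log(error), 1)
a, b = np.exp(intercept), -slope

# Extrapolate: solve a * N^(-b) = human_error for N.
human_error = 0.05  # assumed human-level error rate (placeholder)
n_needed = (a / human_error) ** (1.0 / b)
print(f"fit: error ~ {a:.2f} * N^(-{b:.3f})")
print(f"extrapolated size for human-level error: {n_needed:.2e} parameters")
```

The log-space fit is the standard trick here: it turns the power law into a straight line, so an ordinary least-squares fit is all that is needed before extrapolating.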

At DeepMind, we’re embarking on one of the greatest adventures in scientific history. Our mission is to solve intelligence, to advance science and benefit humanity.

To make this possible, we bring together scientists, designers, engineers, ethicists, and more, to research and build safe artificial intelligence systems that can help transform society for the better.

By combining creative thinking with our dedicated, scientific approach, we’re unlocking new ways of solving complex problems and working to develop a more general and capable problem-solving system, known as artificial general intelligence (AGI). Guided by safety and ethics, this invention could help find answers to some of the most important challenges facing society today.

We regularly partner with academia and nonprofit organisations, and our technologies are used across Google devices by millions of people every day. From solving a 50-year-old grand challenge in biology with AlphaFold and synthesising voices with WaveNet, to mastering complex games with AlphaZero and preserving wildlife in the Serengeti, our novel advances make a positive and lasting impact.

The robotic explorer GLIMPSE, created at ETH Zurich and the University of Zurich, has made it into the final round of a competition for prospecting resources in space. The long-term goal is for the robot to explore the south polar region of the moon.

The south polar region of the moon is believed to contain many resources that would be useful for lunar base operations, such as metals, water in the form of ice, and oxygen stored in rocks. But to find them, an explorer robot that can withstand the extreme conditions of this part of the moon is needed. Numerous craters make moving around difficult, while the low angle of the sunlight and thick layers of dust impede the use of light-based measuring instruments. Strong fluctuations in temperature pose a further challenge.

The European Space Agency (ESA) and the European Space Resources Innovation Center (ESRIC) called on European and Canadian engineering teams to develop robots and tools capable of mapping and prospecting the shadowy south polar region of the moon, between the Shoemaker and Faustini craters. To do this, the researchers had to adapt terrestrial exploration technologies to the harsh conditions on the moon.

Deep learning models have proved to be highly promising tools for analyzing large numbers of images. Over the past decade or so, they have thus been introduced in a variety of settings, including research laboratories.

In the field of biology, deep learning models could potentially facilitate the quantitative analysis of microscopy images, allowing researchers to extract meaningful information from these images and interpret their observations. Training models to do this, however, can be very challenging, as it often requires the extraction of features (i.e., number of cells, area of cells, etc.) from microscopy images and the manual annotation of training data.

Researchers at CERVO Brain Research Center, the Institute for Intelligence and Data, and Université Laval in Canada have recently developed an artificial neural network that can perform in-depth analyses of microscopy images using simpler, image-level annotations. This model, dubbed MICRA-Net (MICRoscopy Analysis Network), was introduced in a paper published in Nature Machine Intelligence.
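As a rough illustration of how image-level supervision can still yield localization (a generic weak-supervision sketch, not MICRA-Net’s actual architecture), the toy PyTorch example below trains a small classifier on whole-image labels and then derives a coarse class activation map from its convolutional features; all data, labels, and sizes are synthetic placeholders:

```python
import torch
import torch.nn as nn

class TinyClassifier(nn.Module):
    """Small CNN trained with image-level labels only."""
    def __init__(self, n_classes=2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.pool = nn.AdaptiveAvgPool2d(1)  # global average pooling
        self.fc = nn.Linear(32, n_classes)

    def forward(self, x):
        fmap = self.features(x)              # (B, 32, H, W)
        logits = self.fc(self.pool(fmap).flatten(1))
        return logits, fmap

model = TinyClassifier()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# Synthetic "microscopy" batch: 8 single-channel 64x64 images,
# each carrying only an image-level label (0 or 1).
x = torch.randn(8, 1, 64, 64)
y = torch.randint(0, 2, (8,))

for _ in range(5):                           # a few illustrative steps
    logits, _ = model(x)
    loss = loss_fn(logits, y)
    opt.zero_grad()
    loss.backward()
    opt.step()

# Class activation map: weight the feature maps by the classifier
# weights of the predicted class. This yields a coarse localization
# map even though no pixel-level annotations were ever provided.
with torch.no_grad():
    logits, fmap = model(x)
    cls = logits.argmax(1)                   # predicted class per image
    w = model.fc.weight[cls]                 # (B, 32)
    cam = torch.einsum("bc,bchw->bhw", w, fmap)
print(cam.shape)                             # (8, 64, 64) localization maps
```

The appeal of this setup mirrors the paper’s motivation: labeling a whole image once is far cheaper than outlining every cell by hand.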

A team of international scientists has performed difficult machine learning computations using a nanoscale device named an “optomemristor.”

The chalcogenide thin-film device uses both light and electrical signals to interact and to emulate the multi-factor biological computations of the mammalian brain, while consuming very little energy.

To date, research on hardware for artificial intelligence and machine learning applications has concentrated mainly on developing electronic or photonic synapses and neurons, and on combining these to carry out basic forms of neural-type processing.
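For intuition about what “multi-factor” computation means here, the toy example below implements a three-factor synaptic update, in which a Hebbian pre/post correlation term is gated by a third, modulatory signal. This is a conceptual sketch of the class of rules such hardware targets, not a model of the optomemristor’s physics:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(0, 0.1, size=(4, 4))    # synaptic weights
lr = 0.01

for step in range(100):
    pre = rng.random(4)                # presynaptic activity
    post = w @ pre                     # postsynaptic activity
    modulator = rng.random()           # third factor (e.g. a reward signal)
    # Three-factor rule: the Hebbian update only takes effect to the
    # extent the modulatory factor permits; the -w term keeps the
    # weights from growing without bound.
    w += lr * modulator * (np.outer(post, pre) - w)

print(w.round(3))
```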

Machine learning techniques are designed to mathematically emulate the functions and structure of neurons and neural networks in the brain. However, biological neurons are very complex, which makes artificially replicating them particularly challenging.

Researchers at Korea University have recently tried to reproduce the complexity of biological neurons more effectively by approximating the function of individual neurons and synapses. Their paper, published in Nature Machine Intelligence, introduces a network of evolvable neural units (ENUs) that can adapt to mimic specific neurons and mechanisms of synaptic plasticity.

“The inspiration for our paper comes from the observation of the complexity of biological neurons, and the fact that it seems almost impossible to model all of that complexity produced by nature mathematically,” Paul Bertens, one of the researchers who carried out the study, told TechXplore. “Current artificial neural networks used in deep learning are very powerful in many ways, but they do not really match biological neural network behavior. Our idea was to use these existing artificial neural networks not to model the entire network, but to model each individual neuron and synapse.”
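A bare-bones sketch of that idea follows, assuming none of the paper’s specifics: each “neuron” is replaced by a small parameterized unit, and a population of unit parameters is adapted with a simple evolutionary loop instead of backpropagation. The unit form, the toy task, and all sizes are placeholders, not the ENU design itself:

```python
import numpy as np

rng = np.random.default_rng(1)

def unit(x, theta):
    # One "evolvable unit": a two-parameter nonlinearity standing in
    # for a fixed activation function; theta is what evolution tunes.
    gain, bias = theta
    return np.tanh(gain * x + bias)

def network(x, thetas, w):
    # A layer of 4 units, each with its own evolvable parameters.
    h = np.array([unit(xi, t) for xi, t in zip(w @ x, thetas)])
    return h.sum()

def fitness(thetas, w):
    # Toy task: the network output should match the mean of the input.
    xs = rng.random((16, 3))
    err = [(network(x, thetas, w) - x.mean()) ** 2 for x in xs]
    return -np.mean(err)

w = rng.normal(0, 0.5, size=(4, 3))        # fixed random wiring
pop = rng.normal(0, 1, size=(32, 4, 2))    # population of unit parameters

for gen in range(50):
    scores = np.array([fitness(p, w) for p in pop])
    elite = pop[np.argsort(scores)[-8:]]   # keep the 8 fittest
    # Offspring: mutated copies of the elite replace the rest.
    children = elite.repeat(3, axis=0) + rng.normal(0, 0.1, size=(24, 4, 2))
    pop = np.concatenate([elite, children])

print("best fitness:", float(scores.max()))
```

The design choice worth noting is that nothing inside `unit` is differentiated: selection and mutation do all the adaptation, which is what lets the unit's internal form stay arbitrarily complex.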

Evolution, the process by which living organisms adapt to their surrounding environment over time, has been widely studied over the years. As first hypothesized by Darwin in the mid-1800s, research evidence suggests that most biological species, including humans, continuously adapt to new environmental circumstances, and that this ultimately enables their survival.

In recent years, researchers have been developing advanced computational techniques based on artificial neural networks, architectures inspired by the networks of neurons in the biological brain. Models based on artificial neural networks are trained to optimize millions of synaptic weights over millions of observations in order to make accurate predictions or classify data.

Researchers at Princeton University have recently carried out a study investigating the similarities and differences between artificial and biological neural networks from an evolutionary standpoint. Their paper, published in Neuron, compares the evolution of biological neural networks with that of artificial ones, drawing on theory from psychology.

A research group from Politecnico di Milano has developed a new computing circuit that can execute the advanced operations typical of neural networks for artificial intelligence in a single operation.

The circuit’s performance in terms of speed and energy consumption paves the way for a new generation of computing accelerators that are more energy efficient and more sustainable on a global scale. The study was recently published in the prestigious journal Science Advances.

Recognizing a face or an object, or correctly interpreting a word or a musical tune: these are operations that are today possible on the most common electronic gadgets, such as smartphones and tablets, thanks to artificial intelligence. For this to happen, complicated neural networks need to be appropriately trained, which is so energetically demanding that, according to some studies, the CO2 that derives from the training of a single complex neural network can equal the emissions of five cars throughout their whole life cycle.
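The energy argument is easiest to see with a toy simulation of in-memory computing (a generic resistive-crossbar sketch, not the Milan group’s specific circuit): the weights are stored in the array as conductances, and applying the input voltages produces the whole matrix-vector product as column currents in one physical step, where a digital processor would grind through the multiply-accumulate operations one by one:

```python
import numpy as np

rng = np.random.default_rng(2)

G = rng.uniform(1e-6, 1e-4, size=(4, 3))  # conductances (siemens) act as weights
V = rng.uniform(0.0, 0.2, size=4)         # input voltages applied to the rows

# Ohm's law gives I_ij = V_i * G_ij per device; Kirchhoff's current law
# sums each column, so the readout is the full product I = G^T V at once.
I = G.T @ V
print("column currents (A):", I)

# The same result as the explicit loop a digital processor would run:
I_digital = np.zeros(3)
for j in range(3):
    for i in range(4):
        I_digital[j] += V[i] * G[i, j]
assert np.allclose(I, I_digital)
```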

Robin Murphy, a roboticist at Texas A&M University, has published a Focus piece in the journal Science Robotics outlining her views on the robots portrayed in “Star Wars,” particularly those featured in “The Mandalorian” and “The Book of Boba Fett.” In her article, she says she believes that the portrayals of robots in both series are quite creative, but suggests they are not wild enough to compete with robots that are made and used in the real world today.

Murphy begins by noting that one robot in particular, IG-11 in “The Mandalorian,” makes for good viewing with a rotating head that allows it to shoot at targets in any direction, but she also notes that such a robot would very likely be overly susceptible to joint failure and would be saddled with huge computational demands. She suggests that a more practical design would use fixed-array sensors.

Murphy also notes that robots in “Star Wars” do fail on occasion, generally during suspenseful scenes, which she further notes might explain why the empire met with its demise. As just one example, she wonders why the stormtroopers so often miss their targets. She also notes that in some ways, droids in the “Star Wars” series tend to be far more advanced than robots in the real world, allowing them to hold human-like jobs such as bartending, teaching or translating. Even so, she points out, the producers have shied away from showing them doing more mundane work, like mining.