Category: robotics/AI
Finance Ministry and Israel Innovation Authority (IIA) to test an AI-driven floating solar system that generates electricity by tracking the sun.
The human brain is often described in the language of tipping points: It toes a careful line between high and low activity, between dense and sparse networks, between order and disorder. Now, by analyzing firing patterns from a record number of neurons, researchers have uncovered yet another tipping point — this time, in the neural code, the mathematical relationship between incoming sensory information and the brain’s neural representation of that information. Their findings, published in Nature in June, suggest that the brain strikes a balance between encoding as much information as possible and responding flexibly to noise, which allows it to prioritize the most significant features of a stimulus rather than endlessly cataloging smaller details. The way it accomplishes this feat could offer fresh insights into how artificial intelligence systems might work, too.
A balancing act is not what the scientists initially set out to find. Their work began with a simpler question: Does the visual cortex represent various stimuli with many different response patterns, or does it use similar patterns over and over again? Researchers refer to the neural activity in the latter scenario as low-dimensional: The neural code associated with it would have a very limited vocabulary, but it would also be resilient to small perturbations in sensory inputs. Imagine a one-dimensional code in which a stimulus is simply represented as either good or bad. The amount of firing by individual neurons might vary with the input, but the neurons as a population would be highly correlated, their firing patterns always either increasing or decreasing together in the same overall arrangement. Even if some neurons misfired, a stimulus would most likely still get correctly labeled.
At the other extreme, high-dimensional neural activity is far less correlated. Since information can be graphed or distributed across many dimensions, not just along a few axes like “good-bad,” the system can encode far more detail about a stimulus. The trade-off is that there’s less redundancy in such a system — you can’t deduce the overall state from any individual value — which makes it easier for the system to get thrown off.
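To make the trade-off concrete, here is a minimal toy simulation (an illustration of the general idea only, not the analysis from the Nature paper; the neuron counts and misfire rate are invented): a redundant one-dimensional "good/bad" code survives random misfires almost perfectly, while a code in which every neuron carries an independent feature loses information at exactly the misfire rate.

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_trials, p_misfire = 100, 1000, 0.2

# Low-dimensional ("good/bad") code: every neuron carries the same sign,
# so the population readout is just the average. Redundancy buys robustness.
stimuli = rng.choice([-1, 1], size=n_trials)
responses = np.repeat(stimuli[:, None], n_neurons, axis=1).astype(float)
flips = rng.random((n_trials, n_neurons)) < p_misfire
responses[flips] *= -1                        # random misfires
decoded = np.sign(responses.mean(axis=1))
print("low-dim accuracy:", (decoded == stimuli).mean())    # ~1.0

# High-dimensional code: each neuron encodes an independent feature,
# so a single misfire corrupts that feature and nothing can correct it.
features = rng.choice([-1, 1], size=(n_trials, n_neurons)).astype(float)
noisy = features.copy()
noisy[rng.random((n_trials, n_neurons)) < p_misfire] *= -1
print("high-dim per-feature accuracy:", (noisy == features).mean())  # ~0.8
```

The balance the researchers describe sits between these extremes: enough dimensions to capture the significant features of a stimulus, enough correlation to keep noise from scrambling the readout.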
Top 20 Awesome Robot Animals
Posted in robotics/AI
In this video, we look at the top 20 awesome robot animals. Video link: https://youtube.com/channel/UCK9neHq3aT9dqN_ossHhLew
People seem to be continually surprised by the new capabilities of big machine learning models, such as PaLM, DALL-E, Chinchilla, SayCan, Socratic Models, Flamingo, and Gato (all announced in the last two months!). Luckily, there is a famous paper on how AI progress is governed by scaling laws, under which models predictably get better as they get larger. Could we forecast AI progress ahead of time by measuring how each task improves with model size, drawing out the curve, and calculating what size of model is needed to reach human performance?
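As a rough sketch of that forecasting recipe (with invented numbers, not data from any of the papers named above): scaling laws are approximately straight lines in log-log space, so one can fit a power law to task error versus parameter count and solve for the model size at which the curve would cross an assumed human-level error threshold.

```python
import numpy as np

# Hypothetical benchmark errors for models of increasing size (made-up data,
# loosely in the spirit of scaling-law plots: error falls as a power of N).
params = np.array([1e8, 1e9, 1e10, 1e11])    # model sizes (parameters)
error  = np.array([0.52, 0.34, 0.22, 0.15])  # task error at each size

# Fit log(error) = a * log(N) + b, a straight line in log-log space.
a, b = np.polyfit(np.log(params), np.log(error), 1)

human_level = 0.05                            # assumed human error rate
# Solve a * log(N) + b = log(human_level) for N.
n_needed = np.exp((np.log(human_level) - b) / a)
print(f"exponent a = {a:.3f}, projected size ≈ {n_needed:.2e} parameters")
```

Whether real capabilities actually follow such smooth curves is precisely what the question above is probing.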
At DeepMind, we’re embarking on one of the greatest adventures in scientific history. Our mission is to solve intelligence, to advance science and benefit humanity.
To make this possible, we bring together scientists, designers, engineers, ethicists, and more, to research and build safe artificial intelligence systems that can help transform society for the better.
By combining creative thinking with our dedicated, scientific approach, we’re unlocking new ways of solving complex problems and working to develop a more general and capable problem-solving system, known as artificial general intelligence (AGI). Guided by safety and ethics, this invention could help society find answers to some of the most important challenges facing the world today.
We regularly partner with academia and nonprofit organisations, and our technologies are used across Google devices by millions of people every day. From solving a 50-year-old grand challenge in biology with AlphaFold and synthesising voices with WaveNet, to mastering complex games with AlphaZero and preserving wildlife in the Serengeti, our novel advances make a positive and lasting impact.
Incredible ideas thrive when diverse people join together. With headquarters in London and research labs in Paris, New York, Montreal, Edmonton, and Mountain View, CA, we’re always looking for great people from all walks of life to join our mission.
Robot dog may get to go to the moon
Posted in mapping, robotics/AI, space
The robotic explorer GLIMPSE, created at ETH Zurich and the University of Zurich, has made it into the final round of a competition for prospecting resources in space. The long-term goal is for the robot to explore the south polar region of the moon.
The south polar region of the moon is believed to contain many resources that would be useful for lunar base operations, such as metals, water in the form of ice, and oxygen stored in rocks. But finding them requires an exploration robot that can withstand the extreme conditions of this part of the moon. Numerous craters make moving around difficult, while the low angle of the sunlight and thick layers of dust impede the use of light-based measuring instruments. Strong fluctuations in temperature pose a further challenge.
The European Space Agency (ESA) and the European Space Resources Innovation Center (ESRIC) called on European and Canadian engineering teams to develop robots and tools capable of mapping and prospecting the shadowy south polar region of the moon, between the Shoemaker and Faustini craters. To do this, the researchers had to adapt terrestrial exploration technologies for the harsh conditions on the moon.
Deep learning models have proved to be highly promising tools for analyzing large numbers of images. Over the past decade or so, they have thus been introduced in a variety of settings, including research laboratories.
In the field of biology, deep learning models could potentially facilitate the quantitative analysis of microscopy images, allowing researchers to extract meaningful information from these images and interpret their observations. Training models to do this, however, can be very challenging, as it often requires the extraction of features (e.g., the number or area of cells) from microscopy images and the manual annotation of training data.
Researchers at CERVO Brain Research Center, the Institute for Intelligence and Data, and Université Laval in Canada have recently developed an artificial neural network that could perform in-depth analyses of microscopy images using simpler, image-level annotations. This model, dubbed MICRA-Net (MICRoscopy Analysis neural network), was introduced in a paper published in Nature Machine Intelligence.
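The appeal of image-level annotation is that the supervision is a single tag per image rather than a per-pixel mask. The sketch below is a minimal, hypothetical illustration of that training setup in PyTorch (not the actual MICRA-Net architecture; the model, data, and names here are placeholders): a classifier is trained only on whole-image labels, and its internal feature maps remain available for later mining of localization cues.

```python
import torch
import torch.nn as nn

class TinyMicroscopyNet(nn.Module):
    """Toy weakly supervised classifier: the only supervision is one
    binary label per image ("structure present / absent"), no masks."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        fmap = self.features(x)              # (B, 32, H, W)
        pooled = fmap.mean(dim=(2, 3))       # global average pooling
        return self.head(pooled), fmap       # logit + maps for later mining

model = TinyMicroscopyNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

images = torch.randn(8, 1, 64, 64)            # stand-in grayscale micrographs
labels = torch.randint(0, 2, (8, 1)).float()  # image-level tags only

opt.zero_grad()
logits, feature_maps = model(images)
loss = loss_fn(logits, labels)
loss.backward()
opt.step()
```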
An international team of scientists has performed difficult machine learning computations using a nanoscale device named an “optomemristor.”
The chalcogenide thin-film device uses interacting light and electrical signals to emulate multi-factor biological computations of the mammalian brain while consuming very little energy.
To date, research on hardware for artificial intelligence and machine learning applications has concentrated mainly on developing electronic or photonic synapses and neurons, and combining these to carry out basic forms of neural-type processing.
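Here "multi-factor" refers to plasticity rules in which a synaptic weight is updated by the interaction of several signals at once, rather than by pre- and post-synaptic activity alone. As a purely software illustration of the general concept (a hedged sketch, not a model of how the optomemristor device itself operates), a three-factor rule gates a Hebbian update with a separate modulatory signal:

```python
import numpy as np

def three_factor_update(w, pre, post, modulator, lr=0.01):
    """Hebbian co-activity (pre * post) gated by a third, modulatory
    factor -- e.g. a reward signal or, by loose analogy, an optical
    input acting alongside the electrical ones."""
    return w + lr * modulator * np.outer(post, pre)

rng = np.random.default_rng(1)
w = rng.normal(scale=0.1, size=(4, 8))   # 8 inputs -> 4 outputs
pre = rng.random(8)                      # presynaptic activity
post = w @ pre                           # postsynaptic response
w = three_factor_update(w, pre, post, modulator=0.5)
print(w.shape)                           # (4, 8)
```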
Machine learning techniques are designed to mathematically emulate the functions and structure of neurons and neural networks in the brain. However, biological neurons are very complex, which makes artificially replicating them particularly challenging.
Researchers at Korea University have recently tried to reproduce the complexity of biological neurons more effectively by approximating the function of individual neurons and synapses. Their paper, published in Nature Machine Intelligence, introduces a network of evolvable neural units (ENUs) that can adapt to mimic specific neurons and mechanisms of synaptic plasticity.
“The inspiration for our paper comes from the observation of the complexity of biological neurons, and the fact that it seems almost impossible to model all of that complexity produced by nature mathematically,” Paul Bertens, one of the researchers who carried out the study, told TechXplore. “Current artificial neural networks used in deep learning are very powerful in many ways, but they do not really match biological neural network behavior. Our idea was to use these existing artificial neural networks not to model the entire brain, but to model each individual neuron and synapse.”
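In other words, each neuron and synapse is itself a small trainable network with internal state, rather than a fixed activation function. The sketch below is a highly simplified, hypothetical rendering of that idea (the ENU architecture in the paper is more elaborate; all class and parameter names here are invented): a small unit maps an input plus its internal state to an output plus a new state, playing the role of a single evolvable neuron.

```python
import torch
import torch.nn as nn

class EvolvableUnit(nn.Module):
    """A single 'neuron' modeled as a small network with persistent
    internal state, instead of a fixed activation function."""
    def __init__(self, state_dim=4):
        super().__init__()
        self.state_dim = state_dim
        self.net = nn.Sequential(
            nn.Linear(1 + state_dim, 16), nn.Tanh(),
            nn.Linear(16, 1 + state_dim),
        )

    def forward(self, x, state):
        out = self.net(torch.cat([x, state], dim=-1))
        return out[..., :1], out[..., 1:]     # output, new internal state

unit = EvolvableUnit()
x = torch.randn(32, 1)                        # scalar input per neuron
state = torch.zeros(32, unit.state_dim)       # persistent internal state
for _ in range(5):                            # unroll a few time steps
    y, state = unit(x, state)
print(y.shape)                                # torch.Size([32, 1])
```

Because the same small network is reused for every neuron, its parameters can be trained (or evolved) to reproduce a target neuron's behavior, which is the sense in which the units "adapt to mimic specific neurons."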