
✅ Instagram: https://www.instagram.com/pro_robots.

You are on the PRO Robots channel, and in this video we will talk about artificial intelligence: replicating the structure of the brain, mutual understanding and mutual assistance, self-learning, rethinking biological life forms, replacing people in various jobs, and even cheating. What have neural networks learned lately? All the new skills and superpowers of artificial-intelligence-based systems in one video!

0:00 In this video.
0:26 Isomorphic Labs.
1:14 Artificial intelligence trains robots.
2:01 MIT researchers’ algorithm teaches robots social skills.
2:45 AI adopts brain structure.
3:28 Revealing cause and effect relationships.
4:40 Miami Herald replaces fired journalist with bot.
5:26 Nvidia unveiled a neural network that creates animated 3D face models based on voice.
5:55 Sber presented code generation model based on ruGPT-3 neural network.
6:50 ruDALL-E multimodal neural network.
7:16 Cristofari Neo supercomputer for neural network training.

#prorobots #robots #robot #futuretechnologies #robotics.

I am continuing to introduce you to a series of articles on the nature of human intelligence and the future of artificial-intelligence systems. In the previous article, “Artificial intelligence vs neurophysiology: Why the difference matters”, we established that the basis of any biological nervous system's operation is not a computational function (as in a computer) but a reflex, that is, a prepared response.

But how, then, did our intelligence come about? How did a biological system that merely repeats pre-prepared reactions become a powerful creative machine?

In this article we will answer this question with facts. In creating our intelligence, nature found a solution that is simple yet ingenious, and not without a great mystery of its own, which we will also touch on.

Artificial neural networks are famously inspired by their biological counterparts. Yet compared to human brains, these algorithms are highly simplified, even “cartoonish.”
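
As a rough illustration of just how simplified these models are, here is a minimal sketch in Python (not tied to any particular framework, and with hypothetical example numbers) of the "cartoon" neuron most deep-learning systems are built from: a weighted sum of inputs passed through a fixed nonlinearity, with none of the dendritic structure, spike timing or neuromodulation of a biological cell.

import math

def artificial_neuron(inputs, weights, bias):
    # The entire "neuron": a dot product plus a bias, squashed by a sigmoid.
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1.0 / (1.0 + math.exp(-activation))

# Three inputs, three weights and one bias are the whole state of the unit.
print(artificial_neuron([0.5, -1.0, 2.0], weights=[0.8, 0.2, -0.5], bias=0.1))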

Can they teach us anything about how the brain works?

For a panel at the Society for Neuroscience annual meeting this month, the answer is yes. Deep learning wasn’t meant to model the brain. In fact, it contains elements that are biologically improbable, if not utterly impossible. But that’s not the point, argues the panel. By studying how deep learning algorithms perform, we can distill high-level theories for the brain’s processes—inspirations to be further tested in the lab.

Manufacturing critical materials beyond Earth could enable sustained technological superiority, and asset security and repair for current and future operations.

To meet this unique challenge, DARPA announced it is taking an initial step to explore and de-risk manufacturing capabilities that leverage biological processes in resource-limited environments with its Biomanufacturing: Survival, Utility, and Reliability beyond Earth (B-SURE) program. https://www.darpa.mil/news-events/2021-11-22

Rarely does scientific software spark such sensational headlines. “One of biology’s biggest mysteries ‘largely solved’ by AI”, declared the BBC. Forbes called it “the most important achievement in AI — ever”. The buzz over the November 2020 debut of AlphaFold2, Google DeepMind’s artificial-intelligence (AI) system for predicting the 3D structure of proteins, has only intensified since the tool was made freely available in July.

The excitement relates to the software’s potential to solve one of biology’s thorniest problems — predicting the functional, folded structure of a protein molecule from its linear amino-acid sequence, right down to the position of each atom in 3D space. The underlying physicochemical rules for how proteins form their 3D structures remain too complicated for humans to parse, so this ‘protein-folding problem’ has remained unsolved for decades.

Researchers have worked out the structures of around 160,000 proteins from all kingdoms of life using experimental techniques such as X-ray crystallography and cryo-electron microscopy (cryo-EM), depositing the 3D information in the Protein Data Bank. Computational biologists have made steady gains in developing software that complements these methods, and have correctly predicted the 3D shapes of some molecules from well-studied protein families.
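
To make the last step concrete, the sketch below shows one way such deposited 3D information can be retrieved programmatically. It is a minimal Python example using only the standard library; the download URL pattern is the RCSB PDB file server, and the entry ID 1CRN (crambin) is simply an illustrative choice, not a structure discussed in the article.

import urllib.request

# Released PDB entries can be fetched from the RCSB file server by their 4-character ID.
# 1CRN (crambin) is used here purely as an example.
pdb_id = "1CRN"
url = "https://files.rcsb.org/download/" + pdb_id + ".pdb"

with urllib.request.urlopen(url) as response:
    pdb_text = response.read().decode("utf-8")

# ATOM records hold the 3D coordinates of every atom in the deposited structure.
atom_lines = [line for line in pdb_text.splitlines() if line.startswith("ATOM")]
print(pdb_id, "has", len(atom_lines), "atom records")
print(atom_lines[0])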

Soot is one of the world’s worst contributors to climate change. Its impact is similar to global methane emissions and is second only to carbon dioxide in its destructive potential. This is because soot particles absorb solar radiation, which heats the surrounding atmosphere, resulting in warmer global temperatures. Soot also causes several other environmental and health problems including making us more susceptible to respiratory viruses.

Soot only persists in the atmosphere for a few weeks, suggesting that if these emissions could be stopped then the air could rapidly clear. This was demonstrated during recent lockdowns, with some major cities reporting clear skies after industrial emissions stopped.

But soot is also part of our future. It can be converted into useful carbon black through thermal treatment that removes any harmful components. Carbon blacks are critical ingredients in batteries, tires and paint. If these carbons are made small enough they can even be made to fluoresce, and they have been used for tagging, in catalysts and even in solar cells.

Rich dynamics in a living neuronal system can be considered as a computational resource for physical reservoir computing (PRC). However, PRC that generates a coherent signal output from a spontaneously active neuronal system is still challenging. To overcome this difficulty, we here constructed a closed-loop experimental setup for PRC of a living neuronal culture, where neural activities were recorded with a microelectrode array and stimulated optically using caged compounds. The system was equipped with first-order reduced and controlled error learning to generate a coherent signal output from a living neuronal culture. Our embodiment experiments with a vehicle robot demonstrated that the coherent output served as a homeostasis-like property of the embodied system from which a maze-solving ability could be generated. Such a homeostatic property generated from the internal feedback loop in a system can play an important role in task solving in biological systems and enable the use of computational resources without any additional learning.
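
The experiments above use a living culture, but the readout-training idea can be sketched in software. Below is a small, hypothetical Python/NumPy illustration in which a random recurrent network stands in for the culture's spontaneous dynamics, and a linear readout is trained online with the recursive-least-squares rule at the core of FORCE ("first-order reduced and controlled error") learning, so that the closed loop settles onto a coherent sinusoidal output. None of the parameters correspond to the actual experimental setup.

import numpy as np

rng = np.random.default_rng(0)

N = 200                   # reservoir units standing in for the recorded neurons
dt, tau = 0.001, 0.01     # integration step and unit time constant (seconds)
g = 1.5                   # recurrent gain, strong enough for rich spontaneous activity

J = g * rng.standard_normal((N, N)) / np.sqrt(N)   # fixed random recurrent weights
w_fb = rng.uniform(-1.0, 1.0, N)                    # feedback of the readout into the reservoir
w_out = np.zeros(N)                                 # trainable linear readout
P = np.eye(N)                                       # RLS inverse-correlation matrix

x = 0.5 * rng.standard_normal(N)                    # reservoir state
r = np.tanh(x)
z = 0.0

steps = 20000
target = np.sin(2 * np.pi * 2.0 * dt * np.arange(steps))   # coherent 2 Hz target signal

for t in range(steps):
    # Closed-loop reservoir dynamics with the readout fed back as "stimulation".
    x += dt / tau * (-x + J @ r + w_fb * z)
    r = np.tanh(x)
    z = w_out @ r

    # FORCE / recursive-least-squares update of the readout weights.
    err = z - target[t]
    Pr = P @ r
    k = Pr / (1.0 + r @ Pr)
    P -= np.outer(k, Pr)
    w_out -= err * k

print("final absolute error:", abs(w_out @ r - target[-1]))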

We explore Artificial Intelligence (AI) through Neuromorphic Computing with computer chips that emulate the biological neurons and synapses in the brain. Neuro-biological chip architectures enable machines to solve very different kinds of problems than traditional computers, the kinds of problems we previously thought only humans could tackle.
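
As a rough, hypothetical illustration of the kind of dynamics such chips emulate (not a description of the Gimzewski Lab's actual devices), here is a minimal leaky integrate-and-fire neuron in Python: the membrane voltage integrates input current, leaks back toward its resting value, and emits a spike whenever it crosses a threshold.

def leaky_integrate_and_fire(input_current, dt=1.0, tau=20.0,
                             v_rest=0.0, v_reset=0.0, v_threshold=1.0):
    # Return the spike times (in ms) produced by a single model neuron.
    v = v_rest
    spikes = []
    for step, i_in in enumerate(input_current):
        # The membrane voltage leaks toward rest and integrates the input current.
        v += dt / tau * (-(v - v_rest) + i_in)
        if v >= v_threshold:          # threshold crossing: emit a spike and reset
            spikes.append(step * dt)
            v = v_reset
    return spikes

# A constant drive just above threshold produces a regular spike train.
print(leaky_integrate_and_fire([1.2] * 200))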

Our guest today is Kelsey Scharnhorst. Kelsey is an Artificial Neural Network Researcher at UCLA. Her research lab (Gimzewski Lab under James Gimzewski) is focused on creating neuromorphic computer chips and further developing their capabilities.

We’ll talk with Kelsey about how neuromorphic computing is different, how neural-biological computer architecture works, and how it will be used in the future.

Podcast version at: https://is.gd/MM_on_iTunes.