
Speaking the same language: How artificial neurons mimic biological neurons

Artificial intelligence has long been a hot topic: a computer algorithm “learns” from examples that teach it what is “right” and what is “wrong.” Unlike a computer algorithm, the human brain works with neurons, the cells of the brain. These are trained and pass signals on to other neurons. This complex network of neurons and their connecting pathways, the synapses, controls our thoughts and actions.

Biological signals are much more diverse than those in conventional computers. For instance, neurons in a biological neural network communicate with ions, biomolecules and neurotransmitters. More specifically, neurons communicate either chemically, by emitting messenger substances such as neurotransmitters, or electrically, via so-called “action potentials” or “spikes.”

Artificial neurons are a current area of research. Here, efficient communication between biology and electronics requires the realization of artificial neurons that realistically emulate the function of their biological counterparts: artificial neurons capable of processing the diversity of signals that exist in biology. Until now, most artificial neurons have only emulated their biological counterparts electrically, without taking into account the wet biological environment of ions, biomolecules and neurotransmitters.
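
To make the “spike” side of this concrete, here is a minimal sketch of a leaky integrate-and-fire neuron, one of the simplest models of how a membrane potential charges up and emits an action potential. It is purely illustrative: the parameter values and the Python implementation are assumptions for demonstration, not taken from the research described above.

```python
import numpy as np

# Minimal leaky integrate-and-fire neuron (illustrative only; the
# parameter values below are arbitrary, not taken from the article).
def simulate_lif(input_current, dt=1e-3, tau=0.02, v_rest=-0.070,
                 v_reset=-0.075, v_threshold=-0.055, resistance=1e8):
    """Integrate a current trace; return membrane voltages and spike times."""
    v = v_rest
    voltages, spike_times = [], []
    for step, i_in in enumerate(input_current):
        # Leaky integration: voltage decays toward rest, driven by input.
        v += (-(v - v_rest) + resistance * i_in) * (dt / tau)
        if v >= v_threshold:          # threshold crossed -> emit a spike
            spike_times.append(step * dt)
            v = v_reset               # reset after the action potential
        voltages.append(v)
    return np.array(voltages), spike_times

# A constant 0.2 nA input for 200 ms produces a regular spike train.
current = np.full(200, 0.2e-9)
_, spikes = simulate_lif(current)
print(len(spikes), "spikes at t =", [round(t, 3) for t in spikes])
```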

Materials Made of Mechanical Neural Networks Can Learn to Adapt Their Physical Properties

A new type of material can learn and improve its ability to deal with unexpected forces thanks to a unique lattice structure with connections of variable stiffness, as described in a new paper by my colleagues and me.

The new material is a type of architected material, which gets its properties mainly from the geometry and specific traits of its design rather than what it is made out of. Take hook-and-loop fabric closures like Velcro, for example. It doesn’t matter whether it is made from cotton, plastic or any other substance. As long as one side is a fabric with stiff hooks and the other side has fluffy loops, the material will have the sticky properties of Velcro.

My colleagues and I based our new material’s architecture on that of an artificial neural network—layers of interconnected nodes that can learn to do tasks by changing how much importance, or weight, they place on each connection. We hypothesized that a mechanical lattice with physical nodes could be trained to take on certain mechanical properties by adjusting each connection’s rigidity.
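
As a loose software analogy for that hypothesis (not the actual mechanical model from the paper), the sketch below treats each connection’s stiffness as a trainable weight and nudges the values with gradient descent until the lattice’s force response matches a target response. The linear-spring assumption and all numbers are illustrative.

```python
import numpy as np

# Toy analogy: each connection's stiffness acts like a trainable "weight"
# that is adjusted until the lattice's response matches a target behaviour.
rng = np.random.default_rng(0)

n_connections = 8
stiffness = rng.uniform(0.5, 1.5, n_connections)          # initial stiffnesses
target_stiffness = rng.uniform(0.5, 1.5, n_connections)   # desired behaviour

# Training data: random displacement patterns and the forces the
# target lattice would produce for them (linear-spring assumption).
displacements = rng.normal(size=(100, n_connections))
forces = displacements @ target_stiffness

learning_rate = 0.1
for epoch in range(200):
    predicted = displacements @ stiffness            # lattice response
    error = predicted - forces
    grad = displacements.T @ error / len(forces)     # gradient of mean-squared error
    stiffness -= learning_rate * grad                # adjust each connection's rigidity

print("trained stiffness:", np.round(stiffness, 3))
print("target stiffness: ", np.round(target_stiffness, 3))
```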

How Humans and Artificial Intelligence are the Same

Scientists have found new similarities between human brains and current artificial intelligence models, indicating that little more is needed beyond better hardware and some further efficiency improvements for artificial intelligence to beat humans in nearly every imaginable field.

TIMESTAMPS:
00:00 More Similar than not…
01:37 How AI and us perceive Time
04:19 What is Artificial General Intelligence
05:57 When can we expect AGI?
07:12 Last Words

#ai #agi #humans

Digital Doubles and Second Selves

This time I come to talk about a new concept in this Age of Artificial Intelligence and the already insipid world of Social Networks. Initially, quite a few years ago, I named it “Counterpart” (long before the TV series “Counterpart” and “Black Mirror”, or even the movie “Transcendence”).

It was the essence of the ETER9 Project that was taking shape in my head.

Over the years, and with the evolution of technologies (and of the human being himself), the “Counterpart” concept has kept improving and, with each passing day, it makes more sense!

Imagine a purely digital receptacle with the basics inside, like that Intermediate Software (BIOS(1)) that computers have between the Hardware and the Operating System. That receptacle waits for you. One way or another, it waits patiently for you, as if waiting for a Soul to come alive in the ether of digital existence.

‘World’s most advanced’ robot will have working legs within a year

With her eerily realistic facial expressions and movements, Ameca has been billed as the world’s most advanced humanoid robot.

But if that wasn’t impressive enough, soon she could be walking around too.

That’s because Engineered Arts, the British company that designed the robot, has revealed that engineers are working on ‘prototype legs’ which should be ready within the next year.

Global AI Ethics Agreement Commits Universities to Human-Centered AI

A new global agreement has been established by eight universities worldwide, committing them to the development of human-centered approaches to artificial intelligence (AI). The newest university to join the agreement, which could impact people all across the globe, was the University of Florida (UF).

The Global University Summit was held back on October 27 at Notre Dame University. Joseph Glover, UF provost and senior vice president of academic affairs, signed The Rome Call for AI Ethics on behalf of the University of Florida. He also served as a panelist for the two-day summit, which was attended by 36 universities from around the world.

Ensuring Human-Centered Principles

Kyle Meredith with… Mark Rylance and Trudie Styler

Mark Rylance & Trudie Styler on AI, Singularity, Othering, Empathy, and Evolution.

Actor Mark Rylance and director/activist Trudie Styler sit down with Kyle Meredith to talk about Spark Hunter, a new audio drama in which the world’s most advanced AI has dinner with her maker for a philosophical discussion to determine whether she represents a new hope for the world or its destruction. The two discuss how the Dalai Lama brought them together, what it means to have standing inside the laws of nature, and the themes of othering, racism, empathy, and evolution that the spy drama digs into. Rylance, who also starred in Ready Player One and Don’t Look Up, and Styler also consider whether we’ll ever see the singularity, and whether robots should even aspire to be more human, especially considering figures like Elon Musk and his actions.

Midjourney just got a big update, and it’s BETTER than DALL-E 2

Midjourney just got an update, Midjourney V4, and is it BETTER than DALL-E 2?! Today we compare these two text-to-image AI art generators and find out.

▼ Link(s) From Today’s Video:

✩ Glibatree’s Video (ft. me!) https://www.youtube.com/watch?v=EadcJIz-E48&t=0s&ab_channel=Glibatree.

✩ Midjourney: https://www.midjourney.com/home/

✩ DALL-E 2: https://labs.openai.com/

► MattVidPro Website: https://MattVidPro.com.

Why Neural Networks can learn (almost) anything

A video about neural networks, how they work, and why they’re useful.

My twitter: https://twitter.com/max_romana.

SOURCES
Neural network playground: https://playground.tensorflow.org/

Universal Function Approximation:
Proof: https://cognitivemedium.com/magic_paper/assets/Hornik.pdf.
Covering ReLUs: https://proceedings.neurips.cc/paper/2017/hash/32cbf687880eb…tract.html.
Covering discontinuous functions: https://arxiv.org/pdf/2012.03016.pdf.
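
For an intuitive feel of what these approximation results say, the small sketch below (illustrative only, using NumPy) fits a sum of shifted ReLU “hinges” to sin(x) by least squares; adding more hinges shrinks the error, which is the piecewise-linear intuition behind the ReLU results linked above.

```python
import numpy as np

# Universal approximation in the piecewise-linear spirit of the ReLU papers:
# a one-hidden-layer network whose output weights are fit by least squares.
def relu(x):
    return np.maximum(0.0, x)

x = np.linspace(0, 2 * np.pi, 400)
target = np.sin(x)

# Hidden layer: 20 ReLU units with fixed, evenly spaced "kinks".
knots = np.linspace(0, 2 * np.pi, 20)
hidden = relu(x[:, None] - knots[None, :])             # shape (400, 20)
features = np.hstack([hidden, np.ones((len(x), 1))])   # plus a bias column

# Solve for output weights directly instead of using gradient descent.
weights, *_ = np.linalg.lstsq(features, target, rcond=None)
approx = features @ weights

print("max error:", np.max(np.abs(approx - target)))   # small, shrinks with more units
```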

Turing Completeness:
Networks of infinite size are Turing complete: Neural Computability I & II (behind a paywall unfortunately, but cited in the following paper)
RNNs are Turing complete: https://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf.
Transformers are Turing complete: https://arxiv.org/abs/2103.

More on backpropagation:
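
As a rough, self-contained illustration of the idea (not one of the video’s sources), a tiny one-hidden-layer network can be trained on XOR with the gradients written out by hand:

```python
import numpy as np

# Minimal backpropagation example: a 2-8-1 sigmoid network learning XOR.
# Written from scratch for illustration; architecture and values are arbitrary.
rng = np.random.default_rng(1)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1 = rng.normal(size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(size=(8, 1)); b2 = np.zeros(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

lr = 0.5
for step in range(10000):
    # Forward pass
    h = sigmoid(X @ W1 + b1)          # hidden activations
    out = sigmoid(h @ W2 + b2)        # network output
    # Backward pass: chain rule, layer by layer (mean-squared-error loss)
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient step on every parameter
    W2 -= lr * h.T @ d_out;  b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h;    b1 -= lr * d_h.sum(axis=0)

print(np.round(out.ravel(), 3))  # should end up close to [0, 1, 1, 0]
```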