
Nov 7, 2022

Why Neural Networks can learn (almost) anything

Posted in category: robotics/AI

A video about neural networks, how they work, and why they’re useful.

My Twitter: https://twitter.com/max_romana

SOURCES
Neural network playground: https://playground.tensorflow.org/

Universal Function Approximation:
Proof: https://cognitivemedium.com/magic_paper/assets/Hornik.pdf
Covering ReLUs: https://proceedings.neurips.cc/paper/2017/hash/32cbf687880eb…tract.html
Covering discontinuous functions: https://arxiv.org/pdf/2012.03016.pdf
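
To make the intuition behind these results concrete, here is a minimal illustrative sketch (mine, not from the video or the papers above): a one-hidden-layer ReLU network whose weights are set by hand so that it reproduces the piecewise-linear interpolant of sin(x) on [0, 2π]. Adding more hidden units drives the error toward zero, which is the basic idea behind universal approximation of continuous functions.

import numpy as np

def build_relu_approximator(f, lo, hi, n_knots):
    # One hidden unit per knot: f_hat(x) = bias + sum_i weights[i] * relu(x - knots[i]),
    # with weights chosen as the changes in slope so f_hat interpolates f at the knots.
    knots = np.linspace(lo, hi, n_knots)
    values = f(knots)
    slopes = np.diff(values) / np.diff(knots)                  # slope of each segment
    weights = np.concatenate([[slopes[0]], np.diff(slopes)])   # slope changes at the knots
    return knots[:-1], weights, values[0]

def f_hat(x, knots, weights, bias):
    hidden = np.maximum(0.0, x[:, None] - knots[None, :])      # hidden layer: ReLU(x - knot)
    return bias + hidden @ weights                             # linear readout

knots, weights, bias = build_relu_approximator(np.sin, 0.0, 2 * np.pi, 50)
x = np.linspace(0.0, 2 * np.pi, 1000)
err = np.max(np.abs(f_hat(x, knots, weights, bias) - np.sin(x)))
print(f"max error with {len(knots)} ReLU units: {err:.5f}")    # shrinks as n_knots grows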

Turing Completeness:
Networks of infinite size are Turing complete: Neural Computability I & II (unfortunately behind a paywall, but cited in the following paper)
RNNs are Turing complete: https://binds.cs.umass.edu/papers/1992_Siegelmann_COLT.pdf
Transformers are Turing complete: https://arxiv.org/abs/2103.

More on backpropagation:
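
As a rough, self-contained illustration of what backpropagation computes (my own sketch, not a link from the original description), here is a one-hidden-layer tanh network fit to sin(x), with the gradients written out explicitly via the chain rule before each gradient-descent update:

import numpy as np

rng = np.random.default_rng(0)
x = np.linspace(-np.pi, np.pi, 200)[:, None]    # inputs, shape (N, 1)
y = np.sin(x)                                   # targets

# Parameters of f(x) = tanh(x W1 + b1) W2 + b2, small random initialization
W1, b1 = rng.normal(0.0, 1.0, (1, 16)), np.zeros(16)
W2, b2 = rng.normal(0.0, 0.5, (16, 1)), np.zeros(1)

lr = 0.05
for step in range(3001):
    # Forward pass
    z = x @ W1 + b1             # pre-activations, (N, 16)
    h = np.tanh(z)              # hidden activations
    y_hat = h @ W2 + b2         # predictions, (N, 1)
    loss = np.mean((y_hat - y) ** 2)

    # Backward pass: chain rule from the loss back to every parameter
    d_yhat = 2.0 * (y_hat - y) / len(x)   # dL/dy_hat
    dW2 = h.T @ d_yhat
    db2 = d_yhat.sum(axis=0)
    d_h = d_yhat @ W2.T                   # dL/dh
    d_z = d_h * (1.0 - h ** 2)            # tanh'(z) = 1 - tanh(z)^2
    dW1 = x.T @ d_z
    db1 = d_z.sum(axis=0)

    # Plain gradient-descent update
    for p, g in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        p -= lr * g

    if step % 1000 == 0:
        print(f"step {step:4d}  loss {loss:.4f}")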
