Dec 28, 2023
From Language to Consciousness (Guest: Joscha Bach)
Posted by Dan Breeden in categories: futurism, neuroscience
Guest lecture by Joscha Bach on the past and future of language models.
An interview with J. Storrs Hall, author of the epic book "Where Is My Flying Car? A Memoir of Future Past": "The book starts as an examination of the technical limitations of building flying cars and evolves into an investigation of the scientific, technological, and social roots of the economic…
J. Storrs Hall, or Josh, is an independent researcher and author.
Physicist Federico Faggin is none other than the inventor of both the microprocessor and silicon gate technology, which spawned the explosive progress in computer technology we have witnessed over the past five decades. He is also probably the world's most well-rounded idealist alive. Mr. Faggin approaches idealism from both a deeply technical and a deeply personal, experiential perspective. In this interview, Essentia Foundation's Natalia Vorontsova engages in an open, free-ranging, but very accessible conversation with Mr. Faggin.
The famous red giant has behaved oddly in recent years, and astronomers now believe its end is close.
Last year, I hosted a thrilling conversation between @SabineHossenfelder, Carlo Rovelli, and Eric Weinstein as they debated quantum physics, consciousness, and the mystery of reality.

IAI Live is a monthly event featuring debates, talks, interviews, documentaries and music. LIVE.

See the world's leading thinkers debate the big questions for real, LIVE in London. Tickets: https://howthelightgetsin.org/

To discover more talks, debates, interviews and academies with the world's leading speakers, visit https://iai.tv/

We imagine physics is objective. But quantum physics found that the act of human observation changes the outcome of an experiment. Many scientists assume this central role of the observer is limited to quantum physics. But is this an error? As Heisenberg put it, …
Puzzling ancient galaxies and oddly shaped clusters suggest we have glimpsed cosmic strings travelling at the speed of light – and with them clues to a deeper theory of reality.
By Dan Falk
Advances in deep learning have influenced a wide variety of scientific and industrial applications of artificial intelligence. These applications involve complex sequential data processing tasks: natural language processing, conversational AI, time-series analysis, and indirectly sequential formats (such as images and graphs) are common examples. Recurrent Neural Networks (RNNs) and Transformers are the most common methods, and each has advantages and disadvantages. RNNs have a lower memory requirement, especially when dealing with long sequences. However, they scale poorly because of issues like the vanishing gradient problem, and because training cannot be parallelized along the time dimension.
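For intuition, here is a minimal NumPy sketch of a vanilla RNN (an illustration only; names and shapes are my own, not from any paper) showing why the time dimension cannot be parallelized and where vanishing gradients come from:

```python
import numpy as np

def rnn_forward(x, W_h, W_x, b):
    """Vanilla RNN over a (T, D) input sequence.

    Each hidden state depends on the previous one, so the loop over t is
    inherently sequential (no parallelism across time), while the recurrent
    state needs only O(H) memory regardless of sequence length.
    Backpropagation through many tanh(W_h @ ...) steps repeatedly multiplies
    gradients by W_h's Jacobian, which is what makes them vanish or explode
    on long sequences.
    """
    T, _ = x.shape
    H = W_h.shape[0]
    h = np.zeros(H)
    states = []
    for t in range(T):                 # sequential: step t needs h from t-1
        h = np.tanh(W_h @ h + W_x @ x[t] + b)
        states.append(h)
    return np.stack(states)            # (T, H)
```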
Transformers are an effective substitute: they handle both short- and long-term dependencies and enable parallelized training. In natural language processing, models like GPT-3, ChatGPT, LLaMA, and Chinchilla demonstrate the power of Transformers. However, the self-attention mechanism has quadratic complexity in sequence length, making it computationally and memory-expensive and therefore unsuitable for tasks with limited resources and long sequences.
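For concreteness, a minimal NumPy sketch of scaled dot-product self-attention (a generic textbook form, not any particular model's code); the explicit (T, T) score matrix is the source of the quadratic cost:

```python
import numpy as np

def self_attention(q, k, v):
    """Scaled dot-product attention over (T, D) queries, keys, and values.

    The score matrix is (T, T): both compute and memory grow quadratically
    with sequence length T, which is the bottleneck for long sequences
    and resource-limited settings.
    """
    T, D = q.shape
    scores = (q @ k.T) / np.sqrt(D)               # (T, T): quadratic in T
    scores -= scores.max(axis=-1, keepdims=True)  # softmax stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ v                            # (T, D)
```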
A group of researchers addressed these issues by introducing the Receptance Weighted Key Value (RWKV) model, which combines the best features of RNNs and Transformers while avoiding their major shortcomings. RWKV preserves the expressive qualities of the Transformer, such as parallelized training and robust scalability, yet eliminates the memory bottleneck and quadratic scaling common to Transformers, achieving efficient linear scaling instead.
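As a rough sketch of how RWKV's WKV operator achieves this (heavily simplified from the paper's formulation: single head, no time-mixing or gating, no overflow safeguards; variable names are mine), the attention-like weighting becomes a running, exponentially decayed average that needs only O(1) state per channel:

```python
import numpy as np

def wkv(k, v, w, u):
    """Simplified RWKV-style WKV recurrence.

    k, v : (T, D) keys and values
    w    : (D,) non-negative per-channel decay rate
    u    : (D,) bonus weight for the current token

    Each step reads and updates O(D) state, so the whole sequence costs
    O(T * D) time with constant memory per step, versus the O(T^2) score
    matrix of self-attention. (Real implementations add numerical
    safeguards against exp() overflow, omitted here.)
    """
    T, D = k.shape
    num = np.zeros(D)                  # decayed sum of exp(k_i) * v_i
    den = np.zeros(D)                  # decayed sum of exp(k_i)
    out = np.zeros((T, D))
    for t in range(T):
        cur = np.exp(u + k[t])         # current token gets the bonus u
        out[t] = (num + cur * v[t]) / (den + cur)
        # fold token t into the state, decayed by exp(-w) for future steps
        num = np.exp(-w) * (num + np.exp(k[t]) * v[t])
        den = np.exp(-w) * (den + np.exp(k[t]))
    return out
```

The recurrent form above is what makes inference linear-time; during training the same quantity can also be computed in a parallel, convolution-like form, which is how RWKV keeps the Transformer's parallelized training.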
Published 12 August 2020. © 2020 The Author(s). Published by IOP Publishing Ltd. Journal of Physics: Photonics, Volume 2, Number 4, Focus on Photonics for Neural Information Processing. Citation: Matěj Hejda et al 2020 J. Phys. Photonics 2 044001. DOI: 10.1088/2515-7647/aba670.