
When the theoretical physicist Leonard Susskind encountered a head-scratching paradox about black holes, he turned to an unexpected place: computer science. In nature, most self-contained systems eventually reach thermodynamic equilibrium — but not black holes. The interior volume of a black hole appears to keep growing without limit. But why? Susskind had a suspicion that a concept called computational complexity, which underpins everything from cryptography to quantum computing to the blockchain and AI, might provide an explanation.

He and his colleagues believe that the complexity of quantum entanglement continues to evolve inside a black hole long past the point of what’s called “heat death.” Now Susskind and his collaborator, Adam Brown, have used this insight to propose a new law of physics: the second law of quantum complexity, a quantum analogue of the second law of thermodynamics.

Also appearing in the video: Xie Chen of Caltech, Adam Bouland of Stanford, and Umesh Vazirani of UC Berkeley.

00:00 Intro to a second law of quantum complexity.

Artificial intelligence is our best bet to understand the nature of our mind, and how it can exist in this universe.

Joscha Bach, Ph.D. is an AI researcher who has worked and published on cognitive architectures, mental representation, emotion, social modeling, and multi-agent systems. He earned his Ph.D. in cognitive science from the University of Osnabrück, Germany. He is especially interested in the philosophy of AI, and in using computational models and conceptual tools to understand our minds and what makes us human.

Joscha has taught computer science, AI, and cognitive science at Humboldt University of Berlin, the Institute for Cognitive Science at Osnabrück, and the MIT Media Lab, and authored the book “Principles of Synthetic Intelligence” (Oxford University Press).

This talk was given at a TEDx event using the TED conference format but independently organized by a local community.

An interview with J. Storrs Hall, author of the epic book “Where Is My Flying Car — A Memoir of Future Past”: “The book starts as an examination of the technical limitations of building flying cars and evolves into an investigation of the scientific, technological, and social roots of the economic…


J. Storrs Hall, or Josh, is an independent researcher and author.

He was the founding Chief Scientist of Nanorex, which is developing a CAD system for nanomechanical engineering.

Advancements in deep learning have influenced a wide variety of scientific and industrial applications of artificial intelligence. Natural language processing, conversational AI, time-series analysis, and indirect sequential formats (such as images and graphs) all involve complicated sequential data processing. Recurrent neural networks (RNNs) and Transformers are the most common approaches, and each has advantages and disadvantages. RNNs require less memory, especially when handling long sequences, but they scale poorly because of issues like the vanishing-gradient problem and the fact that their training cannot be parallelized along the time dimension.

Transformers are an effective substitute: they handle short- and long-term dependencies and allow training to be parallelized. In natural language processing, models like GPT-3, ChatGPT, LLaMA, and Chinchilla demonstrate their power. However, the self-attention mechanism’s quadratic complexity makes it computationally and memory-expensive, and therefore ill-suited to tasks with limited resources and long sequences.
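To make the quadratic cost concrete, here is a minimal NumPy sketch of causal scaled dot-product self-attention (an illustrative textbook form, not the code of any particular model); the T × T score matrix it materializes is what grows quadratically with sequence length:

```python
import numpy as np

def causal_self_attention(q, k, v):
    """Scaled dot-product self-attention over a length-T sequence.

    q, k, v: (T, D) arrays. The (T, T) score matrix below is the source
    of the quadratic compute and memory cost discussed above.
    """
    d = q.shape[-1]
    scores = q @ k.T / np.sqrt(d)                   # (T, T): quadratic in T
    mask = np.triu(np.ones(scores.shape, dtype=bool), k=1)
    scores = np.where(mask, -np.inf, scores)        # token t sees only <= t
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)  # row-wise softmax
    return weights @ v                              # (T, D)

# Doubling T quadruples the score matrix: 2,048 tokens already require a
# 2,048 x 2,048 matrix per head, per layer.
T, D = 2048, 64
rng = np.random.default_rng(0)
out = causal_self_attention(rng.standard_normal((T, D)),
                            rng.standard_normal((T, D)),
                            rng.standard_normal((T, D)))
```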

A group of researchers addressed these issues by introducing the Receptance Weighted Key Value (RWKV) model, which combines the best features of RNNs and Transformers while avoiding their major shortcomings. RWKV preserves the expressive qualities of the Transformer, such as parallelized training and robust scalability, while eliminating the memory bottleneck and quadratic scaling that are common with Transformers, replacing them with efficient linear scaling.
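The linear scaling comes from replacing attention’s pairwise score matrix with a fixed-size running state. Below is a minimal NumPy sketch of the WKV recurrence at the heart of this idea, in a deliberately simplified form — the published model adds receptance gating, token shift, and numerical stabilization of the exponentials:

```python
import numpy as np

def wkv_recurrence(k, v, w, u):
    """Simplified RWKV-style WKV recurrence.

    k, v: (T, D) key and value sequences
    w:    (D,) positive per-channel decay rate
    u:    (D,) extra weight ("bonus") applied to the current token
    Runs in O(T * D) time with O(D) state -- linear in sequence length,
    unlike the O(T^2) cost of full self-attention.
    """
    T, D = k.shape
    out = np.empty((T, D))
    num = np.zeros(D)        # running decayed sum of exp(k_i) * v_i
    den = np.zeros(D)        # running decayed sum of exp(k_i)
    bonus = np.exp(u)
    for t in range(T):
        e_k = np.exp(k[t])   # real implementations stabilize this exp
        out[t] = (num + bonus * e_k * v[t]) / (den + bonus * e_k)
        num = np.exp(-w) * num + e_k * v[t]   # decay state, absorb token t
        den = np.exp(-w) * den + e_k
    return out
```

Because the state (num, den) has a fixed size, memory no longer grows with context length at inference time, while the same computation can be laid out across time positions during training — which is how the model keeps Transformer-style training throughput.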

A central challenge for systems neuroscience and artificial intelligence is to understand how cognitive behaviors arise from large, highly interconnected networks of neurons. Digital simulation has been used to link cognitive behavior to neural activity and bridge this gap in our understanding, but at great expense in time and electricity. A hybrid analog-digital approach, whereby slow analog circuits, operating in parallel, emulate the graded integration of synaptic currents by dendrites while a fast digital bus, operating serially, emulates the all-or-none transmission of action potentials by axons, may improve simulation efficiency. Because of the bus’s serial operation, this approach had not scaled beyond millions of synaptic connections (per bus). That limit was broken by following the design principles the neocortex uses to minimize its wiring. The resulting hybrid analog-digital platform, Neurogrid, scales to billions of synaptic connections among up to a million neurons, and simulates cortical models in real time using a few watts of electricity. Here, we demonstrate that Neurogrid simulates cortical models spanning five levels of experimental investigation: biophysical, dendritic, neuronal, columnar, and area. Bridging these five levels with Neurogrid revealed a novel way that active dendrites could mediate top-down attention.
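As a toy illustration of the division of labor the abstract describes — not Neurogrid’s actual circuitry, and with made-up constants — a neuron’s graded dynamics can be modeled as leaky integration while communication is reduced to all-or-none spike events:

```python
def hybrid_neuron_step(v, i_syn, dt=1e-3, tau=0.02, v_thresh=1.0):
    """One time step of a leaky integrate-and-fire neuron.

    The graded membrane update is the part Neurogrid's analog circuits
    emulate in parallel; the boolean spike is the all-or-none event its
    digital bus would broadcast serially to other neurons.
    """
    v += dt * (-v / tau + i_syn)  # graded ("analog") integration
    spiked = v >= v_thresh        # all-or-none ("digital") event
    if spiked:
        v = 0.0                   # reset after the spike is sent
    return v, spiked

# Drive the neuron with a constant synaptic current and collect spike times.
v, spike_times = 0.0, []
for step in range(1000):
    v, spiked = hybrid_neuron_step(v, i_syn=60.0)
    if spiked:
        spike_times.append(round(step * 1e-3, 3))
print(spike_times)  # regular firing at roughly 28 Hz with these constants
```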

K.B. and N.N.O. are co-founders and equity owners of Femtosense Inc.

As much as I’d love to see it, not until someone solves human-level hands, which I believe will cost about $10+ billion. And the battery needs to run 8 to 12 hours and be swapped or recharged in under 15 minutes.


HOUSTON/AUSTIN, Texas, Dec 27 (Reuters) — Standing at 6 feet 2 inches (188 centimeters) tall and weighing 300 pounds (136 kilograms), NASA’s humanoid robot Valkyrie is an imposing figure.

Valkyrie, named after a female figure in Norse mythology and being tested at the Johnson Space Center in Houston, Texas, is designed to operate in “degraded or damaged human-engineered environments,” like areas hit by natural disasters, according to NASA.

But robots like her could also one day operate in space.

Kmele steps inside Fermilab, America’s premier particle accelerator facility, to find out how the smallest particles in the universe can teach us about its biggest mysteries.

This video is an episode from @The-Well, our publication about ideas that inspire a life well-lived, created with the @JohnTempletonFoundation.

Watch the full podcast now ► Dispatches from The Well

According to Fermilab’s Bonnie Fleming, the pursuit of scientific understanding is “daunting in an inspiring way.” What makes it daunting? The seemingly infinite number of questions, with their potentially inaccessible answers.

In this episode of Dispatches from The Well, host Kmele Foster tours the grounds of America’s legendary particle accelerator to discover how exploring the mysteries at the heart of particle physics helps us better understand some of the most profound mysteries of our universe.

Read the video transcript ► https://bigthink.com/the-well/dispatc…
00:00:00 — The Miracle of Birth
00:04:48 — Exploring the Universe’s Mysteries
00:09:20 — Building Blocks of Matter and the Standard Model
00:13:35 — The Evolving Body of Knowledge
00:17:39 — Understanding the Early Universe
00:22:05 — Reflections on Particle Physics
00:25:34 — The Extraordinary Effort to Understand the Small
00:29:59 — From Paleontology to Astrophysics
00:33:40 — The Importance of the Scientific Method and Being Critical

About Kmele Foster:

Kmele Foster is a media entrepreneur, commentator, and regular contributor to various national publications. He is the co-founder and co-host of The Fifth Column, a popular media criticism podcast.

He is the head of content at Founders Fund, a San Francisco-based venture capital firm investing in companies building revolutionary technologies, and a partner at Freethink, a digital media company focused on the people and ideas changing our world.

Kmele also serves on the Board of Directors of the Foundation for Individual Rights and Expression (FIRE).
Read more from The Well:
Actually, neuroscience suggests “the self” is real
https://bigthink.com/the-well/actuall…
Mary Shelley’s Frankenstein can illuminate the debate over generative AI
https://bigthink.com/the-well/mary-sh…
Few of us desire true equality. It’s time to own up to it
https://bigthink.com/the-well/few-des…

About The Well
Do we inhabit a multiverse? Do we have free will? What is love? Is evolution directional? There are no simple answers to life’s biggest questions, and that’s why they’re the questions occupying the world’s brightest minds.

Together, let’s learn from them.


The science of predicting chaotic systems lies at the intriguing intersection of physics and computer science. This field delves into understanding and forecasting the unpredictable nature of systems where small initial changes can lead to significantly divergent outcomes. It’s a realm where the butterfly effect reigns supreme, challenging the traditional notions of predictability and order.

Central to this domain is the unpredictability inherent in chaotic systems: their sensitive dependence on initial conditions means that tiny measurement errors compound rapidly, making long-term prediction extremely difficult. Researchers strive for methods that can accurately anticipate the future states of such systems despite this sensitivity.
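The Lorenz system makes this sensitivity easy to see. In the sketch below — a standard demonstration, not any particular paper’s method — two trajectories start one part in a billion apart and are integrated with a simple Euler step; their separation grows roughly exponentially until it saturates at the size of the attractor:

```python
import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """One forward-Euler step of the Lorenz system (chaotic for these parameters)."""
    x, y, z = state
    return state + dt * np.array([
        sigma * (y - x),
        x * (rho - z) - y,
        x * y - beta * z,
    ])

a = np.array([1.0, 1.0, 1.0])
b = a + np.array([1e-9, 0.0, 0.0])   # initial conditions differ by 1e-9
for step in range(1, 3001):
    a, b = lorenz_step(a), lorenz_step(b)
    if step % 500 == 0:
        print(f"t = {step * 0.01:5.1f}   separation = {np.linalg.norm(a - b):.3e}")
# The gap grows by many orders of magnitude: even with the exact equations
# in hand, long-term prediction is defeated by initial-condition error.
```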

Prior approaches in chaotic system prediction have largely centered around domain-specific and physics-based models. These models, informed by an understanding of the underlying physical processes, have been the traditional tools for tackling the complexities of chaotic systems. However, their effectiveness is often limited by the intricate nature of the systems they attempt to predict.