
Page 255

Mar 19, 2024

Scientists Say There Could Be a ‘Mirror Universe’ Reflecting a Parallel Realm

Posted in category: cosmology

Is this where dark matter is hiding in plain sight?

Mar 19, 2024

Secrets of Quantum Physics, “Einstein’s Nightmare” 4k

Posted in categories: particle physics, quantum physics

Quantum physics began in the early 20th century, as scientists tried to understand the light emitted by light bulbs. This simple quest led them on a deep journey.

Professor Jim Al-Khalili reveals how Einstein thought he’d found a fatal flaw in quantum physics, because it implied that subatomic particles can communicate faster than light.

Mar 19, 2024

Jensen Huang unveils new Nvidia super-chip before robots come onstage: ‘Everything that moves in the future will be robotic’

Posted in categories: futurism, robotics/AI

Nvidia, the $2 trillion AI giant, is moving to lap the market once again.

Mar 19, 2024

Voyager 1 Breaks Silence: A Signal from the Depths of Space!

Posted in category: space

In this thrilling episode, we dive into the heart of cosmic mystery as Voyager 1 sends back a groundbreaking signal after months of silence, and discover how NASA’s quick thinking restored contact with the probe.

Mar 19, 2024

Natural language instructions induce compositional generalization in networks of neurons

Posted in categories: biological, robotics/AI

In this study, we use the latest advances in natural language processing to build tractable models of the ability to interpret instructions to guide actions in novel settings and the ability to produce a description of a task once it has been learned. RNNs can learn to perform a set of psychophysical tasks simultaneously using a pretrained language transformer to embed a natural language instruction for the current task. Our best-performing models can leverage these embeddings to perform a brand-new task with an average performance of 83% correct. Instructed models that generalize performance do so by leveraging the shared compositional structure of instruction embeddings and task representations, such that an inference about the relations between practiced and novel instructions leads to a good inference about what sensorimotor transformation is required for the unseen task. Finally, we show a network can invert this information and provide a linguistic description for a task based only on the sensorimotor contingency it observes.
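The architecture the abstract describes — a frozen pretrained language model whose instruction embedding conditions a sensorimotor RNN — can be sketched in a few lines. This is a minimal illustration, not the paper’s model: `embed_instruction` here is a random stand-in for a real transformer encoder, and all names and dimensions are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM, HIDDEN, SENSORY, MOTOR = 8, 16, 4, 2

# Stand-in for a frozen pretrained language model: each instruction
# string maps to a fixed dense vector. (Hypothetical; the paper uses
# actual transformer embeddings.)
_instruction_cache = {}
def embed_instruction(text):
    if text not in _instruction_cache:
        _instruction_cache[text] = rng.normal(size=EMBED_DIM)
    return _instruction_cache[text]

# Minimal vanilla RNN whose hidden state is driven each step by both
# the sensory input and the fixed instruction embedding.
W_h = rng.normal(scale=0.1, size=(HIDDEN, HIDDEN))
W_x = rng.normal(scale=0.1, size=(HIDDEN, SENSORY))
W_i = rng.normal(scale=0.1, size=(HIDDEN, EMBED_DIM))
W_o = rng.normal(scale=0.1, size=(MOTOR, HIDDEN))

def run_trial(instruction, sensory_seq):
    ctx = embed_instruction(instruction)
    h = np.zeros(HIDDEN)
    for x in sensory_seq:
        h = np.tanh(W_h @ h + W_x @ x + W_i @ ctx)
    return W_o @ h  # motor readout, shape (MOTOR,)

out = run_trial("respond in the direction of the stronger stimulus",
                [rng.normal(size=SENSORY) for _ in range(5)])
print(out.shape)
```

Because the instruction enters only as a context vector, swapping the instruction string changes the sensorimotor mapping without retraining the recurrent weights — the property the paper’s generalization experiments probe.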

Our models make several predictions for what neural representations to expect in brain areas that integrate linguistic information in order to exert control over sensorimotor areas. First, the CCGP analysis of our model hierarchy suggests that when humans must generalize across (or switch between) a set of related tasks based on instructions, the neural geometry observed among sensorimotor mappings should also be present in semantic representations of instructions. This prediction is well grounded in the existing experimental literature, where multiple studies have observed that the type of abstract structure we find in our sensorimotor-RNNs also exists in sensorimotor areas of biological brains [3,36,37]. Our models theorize that the emergence of an equivalent task-related structure in language areas is essential to instructed action in humans. One intriguing candidate for an area that may support such representations is the language-selective subregion of the left inferior frontal gyrus. This area is sensitive to both lexico-semantic and syntactic aspects of sentence comprehension, is implicated in tasks that require semantic control, and lies anatomically adjacent to another functional subregion of the left inferior frontal gyrus that is implicated in flexible cognition [38,39,40,41].

We also predict that individual units involved in implementing sensorimotor mappings should modulate their tuning properties on a trial-by-trial basis according to the semantics of the input instructions, and that failure to modulate tuning in the expected way should lead to poor generalization. This prediction may be especially useful for interpreting multiunit recordings in humans. Finally, given that grounding linguistic knowledge in the sensorimotor demands of the task set improved performance across models (Fig. 2e), we predict that during learning the highest level of the language-processing hierarchy should likewise be shaped by the embodied processes that accompany linguistic inputs, for example, motor planning or affordance evaluation [42].
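The CCGP (cross-condition generalization performance) analysis mentioned above measures how abstract a neural geometry is: a linear decoder for one task variable is trained on a subset of conditions and tested on held-out conditions. A minimal sketch on synthetic data — a hypothetical factorized population code, not the paper’s recordings:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50  # number of model neurons

# Synthetic population code with abstract (factorized) geometry:
# response = task * task_axis + stim * stim_axis + noise.
task_axis = rng.normal(size=N)
stim_axis = rng.normal(size=N)

def population_response(task, stim, trials=100, noise=0.5):
    base = task * task_axis + stim * stim_axis
    return base + noise * rng.normal(size=(trials, N))

# Train a least-squares linear decoder for the stimulus variable
# using only the task = -1 conditions...
X_train = np.vstack([population_response(-1, +1), population_response(-1, -1)])
y_train = np.hstack([np.ones(100), -np.ones(100)])
w, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

# ...then test it on the held-out task = +1 conditions. High accuracy
# on conditions the decoder never saw is high CCGP.
X_test = np.vstack([population_response(+1, +1), population_response(+1, -1)])
y_test = np.hstack([np.ones(100), -np.ones(100)])
acc = np.mean(np.sign(X_test @ w) == y_test)
print(f"CCGP (one train/test split): {acc:.2f}")
```

Because the synthetic code places task and stimulus along independent axes, the decoder transfers almost perfectly to the held-out task; an entangled code would drive this score toward chance.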

One notable negative result of our study is the relatively poor generalization performance of GPTNET (XL), which used at least an order of magnitude more parameters than other models. This is particularly striking given that activity in these models is predictive of many behavioral and neural signatures of human language processing [10,11]. Given this, future imaging studies may be guided by the representations in both autoregressive models and our best-performing models to delineate a full gradient of brain areas involved in each stage of instruction following, from low-level next-word prediction to higher-level structured-sentence representations to the sensorimotor control that language informs.

Mar 19, 2024

Older Than Time? Speck of light glimpsed by Hubble is truly an enormous old galaxy, JWST reveals

Posted in category: cosmology

Dive into the captivating story of Gz9p3, an ancient galaxy that’s challenging our understanding of the cosmos. Revealed by the James Webb Space Telescope, this galactic giant, observed just 510 million years after the Big Bang, is reshaping our views on early universe galactic formation. Join us as we explore the mysteries and wonders of Gz9p3, a window into the universe’s dawn.

Chapters:
00:00 Introduction.
00:54 Unveiling Gz9p3: A Glimpse into the Past.
03:16 Cosmic Collisions: Sculpting Galaxies.
05:03 Rethinking Early Universe Cosmology.
06:25 Outro.
07:13 Enjoy.


Mar 19, 2024

Largest-ever map of universe’s active supermassive black holes released

Posted in category: cosmology

Astronomers have charted the largest-ever volume of the universe with a new map of active supermassive black holes living at the centers of galaxies. Called quasars, the gas-gobbling black holes are, ironically, some of the universe’s brightest objects.

Mar 19, 2024

The Next Generation of Tiny AI: Quantum Computing, Neuromorphic Chips, and Beyond

Posted in categories: biotech/medical, information science, quantum physics, robotics/AI

Amidst rapid technological advancements, Tiny AI is emerging as a silent powerhouse. Imagine algorithms compressed to fit microchips yet capable of recognizing faces, translating languages, and predicting market trends. Tiny AI operates discreetly within our devices, orchestrating smart homes and propelling advancements in personalized medicine.

Tiny AI excels in efficiency, adaptability, and impact by utilizing compact neural networks, streamlined algorithms, and edge computing capabilities. It represents a form of artificial intelligence that is lightweight, efficient, and positioned to revolutionize various aspects of our daily lives.

Looking ahead, quantum computing and neuromorphic chips are emerging technologies taking us into unexplored territory. Quantum computers work differently from classical computers, promising faster solutions to certain problems, realistic simulation of molecular interactions, and quicker decryption of codes. Quantum computing is no longer just a sci-fi idea; it is becoming a real possibility.

Mar 19, 2024

Bridging the Gap Between AI and Neuromorphic Computing

Posted in categories: information science, robotics/AI

In the rapidly evolving landscape of artificial intelligence, the quest for hardware that can keep pace with the burgeoning computational demands is relentless. A significant breakthrough in this quest has been achieved through a collaborative effort spearheaded by Purdue University, alongside the University of California San Diego (UCSD) and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris. This collaboration marks a pivotal advancement in the field of neuromorphic computing, a revolutionary approach that seeks to emulate the human brain’s mechanisms within computing architecture.

The Challenges of Current AI Hardware

The rapid advancements in AI have ushered in complex algorithms and models, demanding an unprecedented level of computational power. Yet, as we delve deeper into the realms of AI, a glaring challenge emerges: the inadequacy of current silicon-based computer architectures in keeping pace with the evolving demands of AI technology.

Mar 19, 2024

Comparing feedforward and recurrent neural network architectures with human behavior in artificial grammar learning

Posted in category: robotics/AI

From Scientific Reports: Artificial grammar learning is a crucial aspect of language acquisition. Prior experimental studies showed that artificial grammars can be learnt by human subjects after little exposure and often without explicit knowledge of the underlying rules. We tested four grammars with different complexity levels both in humans and in feedforward and recurrent networks. Our results show that both architectures can “learn” (via error back-propagation) the grammars after the same number of training sequences as humans do, but recurrent networks perform closer to humans than feedforward ones, irrespective of the grammar complexity level. Moreover, similar to visual processing, in which feedforward and recurrent architectures have been related to unconscious and conscious processes, the difference in performance between architectures over ten regular grammars shows that simpler and more explicit grammars are better learnt by recurrent architectures, supporting the hypothesis that explicit learning is best modeled by recurrent networks, whereas feedforward networks supposedly capture the dynamics involved in implicit learning.
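Artificial grammars of the kind tested in such studies are typically small finite-state machines. The following sketch — a hypothetical Reber-style regular grammar, not one of the four grammars from the study — shows how such a grammar generates training strings and how grammaticality can be checked:

```python
import random

# A small Reber-style regular grammar (hypothetical example): each
# state maps to a list of (symbol, next_state) transitions; a walk
# from S0 to the terminal emits one grammatical string.
GRAMMAR = {
    "S0": [("T", "S1"), ("P", "S2")],
    "S1": [("S", "S1"), ("X", "S3")],
    "S2": [("T", "S2"), ("V", "S3")],
    "S3": [("E", None)],  # terminal transition
}

def generate(rng=random):
    state, out = "S0", []
    while state is not None:
        sym, state = rng.choice(GRAMMAR[state])
        out.append(sym)
    return "".join(out)

def is_grammatical(s):
    # The automaton may be nondeterministic, so track the full set of
    # reachable states while consuming the string.
    states = {"S0"}
    for ch in s:
        nxt = set()
        for st in states:
            for sym, to in GRAMMAR.get(st, []):
                if sym == ch:
                    nxt.add(to)
        states = nxt
        if not states:
            return False
    return None in states  # accepted only if a walk ended at the terminal

print(generate(), is_grammatical("TXE"), is_grammatical("TVE"))
```

Human participants (and the networks) are trained on strings sampled from `generate` and then asked to classify novel strings, which `is_grammatical` scores against the hidden rules.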

Page 255 of 11,065