The human brain is probably the most complex thing in the universe. No other known system can automatically acquire new information and learn new skills, perform multimodal collaborative perception and memory processing, make effective decisions in complex environments, and work stably at low power. For these reasons, brain-inspired research can greatly advance the development of a new generation of artificial intelligence (AI) technologies.

Powered by new machine learning algorithms, large-scale labeled datasets, and superior computing power, AI programs have surpassed humans in speed and accuracy on certain tasks. However, most existing AI systems approach practical tasks from a purely computational perspective, eschewing most neuroscientific detail and tending toward brute-force optimization over large amounts of input data, which makes the resulting systems suitable only for specific types of problems. The long-term goal of brain-inspired intelligence research is to realize a general-purpose intelligent system. The main task is to integrate our understanding of the multi-scale structure of the human brain and its information-processing mechanisms, and to build a cognitive computing model that simulates the cognitive functions of the brain.

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 — 0:07:20 Opening remarks and introduction.
0:07:21 — 0:08:43 Overview.
0:08:44 — 0:20:08 Two different ways to do computation.
0:20:09 — 0:30:11 Do large language models really understand what they are saying?
0:30:12 — 0:49:50 The first neural net language model and how it works.
0:49:51 — 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 — 1:03:18 Does digital intelligence have subjective experience?
1:03:19 — 1:55:36 Q&A.
1:55:37 — 1:58:37 Closing remarks.

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
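The sharing mechanism Hinton describes, many identical copies pooling what they each learned by averaging their weight changes, can be sketched in a few lines. This is an illustrative toy only (the NumPy arrays standing in for model weights, the number of copies, and the update scale are my own assumptions, not anything from the talk):

```python
import numpy as np

def average_weight_updates(updates):
    """Average the weight changes proposed by many identical model copies.

    Each copy trains on its own data shard; averaging merges what they
    all learned into one shared set of weights.
    """
    return np.mean(updates, axis=0)

rng = np.random.default_rng(0)

# Hypothetical setup: 4 identical copies of a tiny 10-weight model.
shared_weights = rng.normal(size=10)
local_updates = [rng.normal(scale=0.01, size=10) for _ in range(4)]

# Every copy applies the same averaged update, so all copies stay identical.
shared_weights = shared_weights + average_weight_updates(local_updates)
print(shared_weights)
```

Because every copy ends up with exactly the same weights, the knowledge gained from all the data shards is shared at the cost of communicating one update per copy, which is the efficiency Hinton contrasts with the slow mimicry required between mortal analog computers.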

We present a new route to ergodicity breaking via Hilbert space fragmentation that displays an unprecedented level of robustness. Our construction relies on a single emergent (prethermal) conservation law. In the limit when the conservation law is exact, we prove the emergence of Hilbert space fragmentation with an exponential number of frozen configurations. These configurations are low-entanglement states in the middle of the energy spectrum and therefore constitute examples of quantum many-body scars. We further prove that every frozen configuration is absolutely stable to arbitrary perturbations, to all finite orders in perturbation theory.

Sarcasm, a complex linguistic phenomenon common in online communication, often serves to express deep-seated opinions or emotions in a manner that can be witty, passive-aggressive, or demeaning to the person being addressed. Recognizing sarcasm in the written word is crucial for understanding the true intent behind a statement, particularly in social media posts or online customer reviews.

While spotting that someone is being sarcastic in the offline world is usually fairly easy given facial expressions and other indicators, sarcasm is much harder to decipher in online text. New work published in the International Journal of Wireless and Mobile Computing hopes to meet this challenge. Geeta Abakash Sahu and Manoj Hudnurkar of Symbiosis International University in Pune, India, have developed an advanced sarcasm detection model aimed at accurately identifying sarcastic remarks in digital conversations.

The team’s model comprises four main phases. It begins with text pre-processing, which filters out common, or “noise,” words such as “the,” “it,” and “and,” and then breaks the text down into smaller units. Features indicative of sarcasm are then extracted from this pre-processed data. To address the challenge of the resulting large number of features, the team applied optimal feature selection techniques, using measures such as information gain, chi-square, mutual information, and symmetrical uncertainty to retain only the most relevant features and so keep the model efficient. A sketch of this kind of pipeline appears below.
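As a rough illustration of how such a pipeline can be assembled (this is not the authors’ implementation; the scikit-learn components, the toy data, and the logistic-regression classifier are stand-ins of my choosing), stop-word filtering, tokenization, and chi-square feature selection might look like:

```python
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.feature_selection import SelectKBest, chi2
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import Pipeline

# Toy labelled examples (1 = sarcastic); a real model needs a large corpus.
texts = [
    "Oh great, another Monday.",
    "The delivery arrived on time.",
    "Wow, what a genius idea.",
    "The battery lasts two days.",
]
labels = [1, 0, 1, 0]

pipeline = Pipeline([
    # Pre-processing: drop stop words ("the", "it", "and", ...) and tokenize.
    ("vectorize", TfidfVectorizer(stop_words="english")),
    # Feature selection: keep the most informative features, scored by chi-square.
    ("select", SelectKBest(chi2, k=5)),
    # Classification: a simple stand-in for the paper's own classifier.
    ("classify", LogisticRegression()),
])

pipeline.fit(texts, labels)
print(pipeline.predict(["Oh sure, that will definitely work."]))
```

The point of the selection step is dimensionality: text vectorization produces thousands of candidate features, and scoring them (here by chi-square; the paper also considers information gain, mutual information, and symmetrical uncertainty) lets the classifier train on only the most relevant ones.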

Issues such as abrupt changes in speed limits and incomplete lane markings are among the most influential factors for predicting road crashes, finds new research by University of Massachusetts Amherst engineers. The study used machine learning to predict which roads may be the most dangerous based on these features.

Published in the journal Transportation Research Record, the study was a collaboration between UMass Amherst civil and environmental engineers Jimi Oke, assistant professor; Eleni Christofa, associate professor; and Simos Gerasimidis, associate professor; and engineers from Egnatia Odos, a publicly owned engineering firm in Greece.

The most influential features included road design issues (such as changes in speed limits that are too abrupt, or guardrail issues), pavement damage (cracks that stretch across the road, and webbed cracking referred to as “alligator” cracking), and incomplete signage and road markings.
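To make the modeling step concrete, here is a minimal sketch of the general approach, a classifier trained on per-segment road features that then ranks those features by importance. The data here are entirely invented, and the random forest is my assumption; the study’s actual models, features, and data are described in the paper:

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(42)

# Synthetic stand-in data: one row per road segment, with binary flags
# mirroring the kinds of features the study found influential.
feature_names = ["speed_limit_change", "guardrail_issue",
                 "transverse_cracking", "alligator_cracking", "missing_markings"]
X = rng.integers(0, 2, size=(200, len(feature_names)))
# Invented label: crash-prone segments correlate with abrupt speed-limit
# changes combined with missing markings or cracking.
y = (X[:, 0] & (X[:, 4] | X[:, 2])).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Rank features by importance, analogous to how the study surfaced
# the most influential crash predictors.
for name, score in sorted(zip(feature_names, model.feature_importances_),
                          key=lambda pair: -pair[1]):
    print(f"{name}: {score:.3f}")
```

Once trained on segments with known crash histories, such a model can score unseen road segments, which is how the study’s approach flags the roads likely to be most dangerous.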

Scientists have uncovered a fascinating link between ancient viruses and the development of myelination, the biological process crucial for the advanced functioning of the nervous system in vertebrates, including humans.

The researchers discovered a gene sequence, ‘RetroMyelin,’ acquired from ancient viruses, that is essential for myelination in vertebrates, suggesting that viral sequences in early vertebrate genomes were pivotal for the development of complex brains. This breakthrough, published in Cell, unravels how myelination evolved and highlights its significance in vertebrate diversity.