
The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 — 0:07:20 Opening remarks and introduction.
0:07:21 — 0:08:43 Overview.
0:08:44 — 0:20:08 Two different ways to do computation.
0:20:09 — 0:30:11 Do large language models really understand what they are saying?
0:30:12 — 0:49:50 The first neural net language model and how it works.
0:49:51 — 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 — 1:03:18 Does digital intelligence have subjective experience?
1:03:19 — 1:55:36 Q&A.
1:55:37 — 1:58:37 Closing remarks.

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low-power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control, it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
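To make the weight-sharing idea in the abstract concrete, here is a minimal NumPy sketch of the general pattern: several identical copies of a model compute weight changes on different data shards and stay in sync by applying the averaged change. The toy linear model and synthetic shards are invented for illustration; this is not how GPT-4 or Gemini is actually trained.

```python
import numpy as np

rng = np.random.default_rng(0)

def gradient(w, X, y):
    # dL/dw for the mean-squared error L = mean((X @ w - y)^2)
    return 2.0 * X.T @ (X @ w - y) / len(y)

# Invented "true" weights and three disjoint data shards, standing in
# for many identical agents each looking at different data.
w_true = np.array([1.0, -2.0, 0.5])
shards = []
for _ in range(3):
    X = rng.normal(size=(64, 3))
    y = X @ w_true + 0.01 * rng.normal(size=64)
    shards.append((X, y))

w = np.zeros(3)   # every copy starts from the same weights
lr = 0.1

for step in range(100):
    # Each copy computes its own weight change on its own shard...
    deltas = [-lr * gradient(w, X, y) for X, y in shards]
    # ...and all copies stay identical by applying the averaged change.
    w = w + np.mean(deltas, axis=0)

print("learned weights:", np.round(w, 3))   # close to w_true
```

The point of the sketch is only that identical copies exchanging averaged weight changes effectively learn from the union of their datasets, which is the efficiency the abstract contrasts with slow, output-mimicking knowledge transfer between mortal analog computers.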

Chinese ambassador Chen Xu called for the high-quality development of artificial intelligence (AI), assistance in promoting children’s mental health, and protection of children’s rights while delivering a joint statement on behalf of 80 countries at the 55th session of the United Nations Human Rights Council (UNHRC) on Thursday.

Chen, China’s permanent representative to the UN Office in Geneva and other international organizations in Switzerland, said that artificial intelligence is a new field of human development, and that countries should adhere to the principles of consultation, joint construction, and shared benefits while working together to advance the governance of artificial intelligence.

The new generation of children has become one of the main groups using and benefiting from AI technology. The joint statement emphasized the importance of children’s mental health issues.

🧠 New Graph Neural Network Technique 🔥

NVIDIA researchers developed WholeGraphStor, a novel #GNN memory optimization.

Storing entire graphs in a compressed format reduces memory footprint,…


Graph neural networks (GNNs) have revolutionized machine learning for graph-structured data. Unlike traditional neural networks, GNNs are good at capturing intricate relationships in graphs, powering applications from social networks to chemistry. They shine particularly in scenarios like node classification, where they predict labels for graph nodes, and link prediction, where they determine the presence of edges between nodes.
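The tweet above is truncated and does not describe NVIDIA's actual WholeGraphStor design, so purely as a generic illustration of the two ideas in play, compressed graph storage and GNN-style message passing for node-level tasks, here is a small NumPy sketch. It uses a textbook compressed sparse row (CSR) layout and a single neighbor-averaging step; a real GNN would add learned weight matrices and nonlinearities and train them for node classification or link prediction.

```python
import numpy as np

# Edges of a toy undirected graph with 5 nodes.
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (1, 3)]
num_nodes = 5

# Compressed sparse row (CSR) adjacency: one offsets array plus one
# neighbor array, far smaller than a dense N x N adjacency matrix.
neighbors = [[] for _ in range(num_nodes)]
for u, v in edges:
    neighbors[u].append(v)
    neighbors[v].append(u)
indptr = np.cumsum([0] + [len(n) for n in neighbors])
indices = np.concatenate([np.array(n) for n in neighbors])

# Random node features; in practice these could be user or atom attributes.
rng = np.random.default_rng(0)
x = rng.normal(size=(num_nodes, 4))

def propagate(x, indptr, indices):
    """One message-passing step: mix each node's features with its neighbors' mean."""
    out = np.zeros_like(x)
    for v in range(len(indptr) - 1):
        nbrs = indices[indptr[v]:indptr[v + 1]]
        out[v] = 0.5 * x[v] + 0.5 * x[nbrs].mean(axis=0)
    return out

h = propagate(x, indptr, indices)   # node embeddings after one hop
print(h.shape)                      # (5, 4)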

A new AI tool can write code and build websites and software from a single prompt. Devin, created by the tech company Cognition, is billed as the first AI software engineer, and it can handle pretty much everything you ask of it. According to its makers, Devin is not intended to replace human engineers but to work hand-in-hand with them and make their lives easier.

“Today we’re excited to introduce Devin, the first AI software engineer. Devin is the new state-of-the-art on the SWE-Bench coding benchmark, has successfully passed practical engineering interviews from leading AI companies, and has even completed real jobs on Upwork. Devin is an autonomous agent that solves engineering tasks through the use of its own shell, code editor, and web browser,” Cognition posted on X (formerly Twitter).

What makes Devin stand out is its ability to plan ahead through complex tasks: it can make thousands of decisions, learn from its mistakes, and improve over time. It also has the tools a human engineer uses, such as a code editor and a browser, at its digital fingertips. On the SWE-bench coding benchmark, which evaluates systems against a standard set of real-world software engineering problems, Devin set a new state of the art, and it also met expectations in practical engineering interviews conducted by leading AI companies, with tasks and challenges relevant to AI and software engineering.
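Cognition has not published Devin's internals, so purely as a hypothetical sketch of the general pattern the article describes, an agent that plans steps, acts through tools such as a shell, and retries after failures, here is a toy Python loop. The plan and revise helpers are invented stand-ins for what a real system would delegate to a language model.

```python
import subprocess

def run_shell(command: str) -> tuple[int, str]:
    """Toy 'shell tool': run a command and return (exit code, combined output)."""
    proc = subprocess.run(command, shell=True, capture_output=True, text=True)
    return proc.returncode, proc.stdout + proc.stderr

def plan(task: str) -> list[str]:
    """Invented stand-in planner; a real agent would ask an LLM to break the task down."""
    return [f"echo 'working on: {task}'", "python -c 'print(2 + 2)'"]

def revise(step: str, output: str) -> str:
    """Invented stand-in repair step; a real agent would feed the error back to the model."""
    return step

def solve(task: str, max_attempts: int = 3) -> None:
    for step in plan(task):
        for _ in range(max_attempts):
            code, output = run_shell(step)
            if code == 0:
                print(f"ok: {step!r} -> {output.strip()}")
                break
            step = revise(step, output)   # retry after learning from the failure
        else:
            print(f"gave up on: {step!r}")

solve("add two numbers")
```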

Amazon is using generative AI to help sellers create product pages.


Amazon is increasingly turning to generative AI to help its sellers create higher quality product pages. The technology promises to save time and improve visibility.

Amazon is testing a new AI feature that allows sellers to create product pages by simply copying and pasting a link. Existing product pages from other sites can be converted into Amazon-optimized listings.

The AI extracts information from the seller’s external site and uses it to create a product page on Amazon, complete with description and images. The goal is to save sellers time.
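Amazon has not detailed how the feature works internally; as a rough, hypothetical sketch of the pipeline the article outlines, fetch the seller's existing page, extract the basics, and draft a listing from them, something like the following Python could serve. The generative step is reduced to a placeholder where a real system would call a model, and the URL is a placeholder too.

```python
import re
import urllib.request

def fetch_page(url: str) -> str:
    """Download the seller's existing product page."""
    with urllib.request.urlopen(url) as resp:
        return resp.read().decode("utf-8", errors="replace")

def extract_fields(html: str) -> dict:
    """Crude extraction of a title and meta description from the page."""
    title = re.search(r"<title>(.*?)</title>", html, re.S | re.I)
    desc = re.search(r'<meta name="description" content="(.*?)"', html, re.I)
    return {
        "title": title.group(1).strip() if title else "",
        "description": desc.group(1).strip() if desc else "",
    }

def draft_listing(fields: dict) -> str:
    """Placeholder for the generative step; a real system would call a model here."""
    return (
        f"Title: {fields['title']}\n"
        f"Bullet: {fields['description'][:120]}\n"
        "Bullet: (model-generated selling points would go here)"
    )

if __name__ == "__main__":
    html = fetch_page("https://example.com/")   # placeholder for a seller's page URL
    print(draft_listing(extract_fields(html)))
```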

Following the announcement of its partnership with OpenAI, tech startup Figure has released a new clip of its humanoid robot, dubbed Figure 1, chatting with an engineer as it puts away the dishes.

And we can’t tell if we’re impressed — or terrified.

“I see a red apple on a plate in the center of the table, a drying rack with cups and a plate, and you standing nearby with your hand on the table,” the robot said in an uncanny voice, showing off OpenAI’s “speech-to-speech reasoning” skills.

AI startup Figure, which emerged from stealth last year, has unveiled the latest upgrades to its Figure 1 humanoid robot.

Founded in 2022 and publicly announced in March 2023, Figure is a California-based company with 80 employees that is building autonomous, general-purpose humanoid robots. Its aim is to address labour shortages, fill jobs that are undesirable or unsafe for humans, and support the global supply chain.

The company is backed by a number of tech leaders including Amazon founder Jeff Bezos, chipmaker NVIDIA, and Microsoft, and it recently announced a deal with ChatGPT maker OpenAI. Figure’s latest round of funding, which closed at $675 million, brought its total valuation to an impressive $2.6 billion.

The advent of AI has ushered in transformative advancements across countless industries. Yet for all its benefits, this technology also has a downside. One of the major challenges AI brings is the amount of energy required to power the GPUs that train large-scale AI models. Computing hardware needs significant maintenance and upkeep, as well as uninterruptible power supplies and cooling fans.

One study found that training some popular AI models can produce about 626,000 pounds of carbon dioxide, the rough equivalent of 300 cross-country flights in the U.S. A single data center can require enough electricity to power 50,000 homes. If this energy comes from fossil fuels, that can mean a huge carbon footprint. Already the carbon footprint of the cloud as a whole has surpassed that of the airline industry.
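As a quick sanity check on that comparison (all of the figures are rough), dividing the training estimate by the number of flights gives the implied per-flight emissions:

```python
# Implied per-flight emissions if 626,000 lb of CO2 ~ 300 cross-country flights.
training_lb = 626_000
flights = 300
per_flight_lb = training_lb / flights               # about 2,087 lb per flight
per_flight_tonnes = per_flight_lb * 0.4536 / 1000   # pounds to metric tons
print(f"{per_flight_lb:.0f} lb = {per_flight_tonnes:.2f} t CO2 per flight")
```

Roughly 2,000 lb, or about one metric ton per flight, is in line with commonly cited per-passenger estimates for a round trip between New York and San Francisco, so the two numbers in the study are at least consistent with each other.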

As the founder of an AI-driven company in the blockchain and cryptocurrency industry, I am acutely aware of the environmental impact of our business. Here are a few ways we are trying to reduce that effect.