
In the digital age, where entertainment is but a click away, a silent yet powerful transformation is underway. Streaming companies, the vanguards of this digital entertainment era, are not just delivering content; they’re crafting experiences, and artificial intelligence (AI) is their most adept tool. Let us explore how AI is not just changing, but revolutionizing the way we consume media.

Gone are the days of aimlessly browsing channels to find something to watch. AI in streaming services is like a discerning director, understanding and curating content to fit the unique tastes of each viewer. It’s an era where your streaming service knows what you want to watch, sometimes even before you do. At the core of AI’s integration into streaming is personalization, which lets organizations craft unique user journeys. Netflix, the colossus of streaming, employs AI algorithms to recommend movies and shows based on your viewing history. Yet recommendation engines built purely on historical preferences have limited value: traditional approaches lean on past viewing data or collaborative filtering, and customer feedback has shown they are an imperfect fit for the precise product-market matching that today’s data makes possible.
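To make the idea concrete, here is a minimal, purely illustrative sketch of item-based collaborative filtering, the kind of historical-preference approach described above. The titles and the tiny rating matrix are invented for the example; real services use far richer signals and models.

```python
import numpy as np

# Toy user-item matrix: rows are viewers, columns are titles.
# 1 = watched/liked, 0 = not watched. Values are illustrative only.
titles = ["Drama A", "Thriller B", "Comedy C", "Documentary D"]
ratings = np.array([
    [1, 1, 0, 0],
    [1, 1, 1, 0],
    [0, 0, 1, 1],
    [0, 1, 1, 1],
], dtype=float)

def cosine_sim(matrix):
    """Pairwise cosine similarity between columns (titles)."""
    norms = np.linalg.norm(matrix, axis=0, keepdims=True)
    norms[norms == 0] = 1.0
    normalized = matrix / norms
    return normalized.T @ normalized

def recommend(user_row, sim, top_n=2):
    """Score unseen titles by their similarity to the user's watch history."""
    scores = sim @ user_row              # aggregate similarity to watched titles
    scores[user_row > 0] = -np.inf       # exclude titles already watched
    return np.argsort(scores)[::-1][:top_n]

similarity = cosine_sim(ratings)
for idx in recommend(ratings[0], similarity):
    print("Suggest:", titles[idx])
```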

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 — 0:07:20 Opening remarks and introduction.
0:07:21 — 0:08:43 Overview.
0:08:44 — 0:20:08 Two different ways to do computation.
0:20:09 — 0:30:11 Do large language models really understand what they are saying?
0:30:12 — 0:49:50 The first neural net language model and how it works.
0:49:51 — 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 — 1:03:18 Does digital intelligence have subjective experience?
1:03:19 — 1:55:36 Q&A
1:55:37 — 1:58:37 Closing remarks.

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
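As a loose illustration of the weight-sharing idea in the abstract (not Hinton’s implementation), here is a toy sketch in which two identical copies of a tiny model train on different data shards and then pool what they learned by averaging their weight changes. The model, data, and learning rate are all made up for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

# Two identical copies of a tiny linear model start from the same weights.
shared_weights = rng.normal(size=3)

def local_gradient_step(weights, X, y, lr=0.1):
    """One gradient-descent step on squared error for y ≈ X @ weights."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

# Each copy trains on its own shard of data...
X1, y1 = rng.normal(size=(8, 3)), rng.normal(size=8)
X2, y2 = rng.normal(size=(8, 3)), rng.normal(size=8)
w1 = local_gradient_step(shared_weights, X1, y1)
w2 = local_gradient_step(shared_weights, X2, y2)

# ...then the copies pool what they learned by averaging their weight changes.
delta = ((w1 - shared_weights) + (w2 - shared_weights)) / 2
shared_weights = shared_weights + delta
print("Updated shared weights:", shared_weights)
```

Averaging the changes (rather than swapping raw data) is what lets many copies learn in parallel while staying the same model, which is the efficiency the abstract points to.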

Imagine your laptop running twice as fast without any hardware upgrades, just the application of smarter software algorithms. That’s the promise of new research that could change how today’s devices function.

The team behind the research, from the University of California, Riverside (UCR), says that the work has huge potential, not just for boosting hardware performance but also for increasing efficiency and significantly reducing energy use.

Referred to as simultaneous and heterogeneous multithreading (SHMT), the innovative process takes advantage of the fact that modern phones, computers, and other gadgets usually rely on more than one processor to do their thinking.
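The following toy sketch illustrates the general idea of dispatching pieces of one task to different processing units at the same time. It is only a conceptual stand-in and does not reproduce UCR’s SHMT framework, which targets real heterogeneous hardware such as CPUs, GPUs, and AI accelerators.

```python
from concurrent.futures import ThreadPoolExecutor
import numpy as np

# Toy stand-ins for two heterogeneous units; a real system would route work
# to physically different processors rather than two Python functions.
def cpu_worker(chunk):
    return np.sum(chunk)

def accelerator_worker(chunk):
    return np.sum(chunk)  # imagine this running on a GPU or NPU instead

data = np.arange(1_000_000, dtype=np.float64)
halves = np.array_split(data, 2)

# Dispatch both halves simultaneously, one per "processor", then combine results.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(cpu_worker, halves[0]),
               pool.submit(accelerator_worker, halves[1])]
    total = sum(f.result() for f in futures)

print("Combined result:", total)
```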

Having refined its charging algorithms, Polar Night Energy is now ready to scale up the storage tech in Pornainen.

Once completed, the new battery will be integrated with the network of Loviisan Lämpö, the Finnish heating company that supplies district heating in the area.

“Loviisan Lämpö is moving towards more environmentally friendly energy production. With the Sand Battery, we can significantly reduce energy produced by combustion and completely eliminate the use of oil,” says CEO Mikko Paajanen.

The film industry, always at the forefront of technological innovation, is increasingly embracing artificial intelligence (AI) to revolutionize movie production, distribution, and marketing. From script analysis to post-production, AI is already reshaping how movies are made and consumed. Let’s explore the current applications of AI in movie studios and speculate on future uses, highlighting real examples and the transformative impact of these technologies.

AI’s infiltration into the movie industry begins at the scriptwriting stage. Tools like ScriptBook use natural language processing to analyze scripts, predict box office success, and offer insights into plot and character development. For instance, 20th Century Fox employed AI to analyze the script of Logan, which helped in making informed decisions about the movie’s plot and themes. In pre-production, AI has also aided casting and location scouting. Warner Bros. partnered with Cinelytic to use AI for casting decisions, evaluating an actor’s market value to predict a film’s financial success. Take location scouting: AI algorithms can sift through thousands of hours of footage to identify suitable filming locations, streamlining what was once a time-consuming process.
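As a very rough illustration of the kind of text processing such tools build on (not ScriptBook’s actual method), the sketch below counts each character’s share of dialogue cues in a screenplay-style snippet. The snippet and the indentation convention for character cues are assumptions made for the example.

```python
import re
from collections import Counter

# Toy script analysis: count each character's share of dialogue cues.
script = """
INT. LAB - NIGHT
                      DANA
          The model finished training.
                      ARI
          Then let's see what it predicts.
                      DANA
          Box office numbers, apparently.
"""

# Assumed screenplay convention: character cues are deeply indented uppercase lines.
cues = re.findall(r"^\s{10,}([A-Z][A-Z ]+)\s*$", script, flags=re.MULTILINE)
dialogue_share = Counter(cue.strip() for cue in cues)

total = sum(dialogue_share.values())
for character, count in dialogue_share.most_common():
    print(f"{character}: {count / total:.0%} of dialogue cues")
```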

During filmmaking, AI plays a crucial role in visual effects (VFX). Disney’s FaceDirector software can generate composite expressions from multiple takes, enabling directors to adjust an actor’s performance in post-production. This technology was notably used in Avengers: Infinity War to perfect emotional expressions in complex CGI scenes. Similarly, AI-driven software like deepfake technology, though controversial, has been used to create realistic face swaps in movies; for instance, it was used in The Irishman to de-age actors, offering a cost-effective alternative to traditional CGI. Additionally, AI is used in color grading and editing. IBM Watson was used to create the movie trailer for Morgan, analyzing visuals, sounds, and compositions from other movie trailers to determine what would be most appealing to audiences.

Multimodal AI for better prevention and treatment of cardiometabolic diseases.


The rise of artificial intelligence (AI) has revolutionized various scientific fields, particularly in medicine, where it has enabled the modeling of complex relationships from massive datasets. Initially, AI algorithms focused on improved interpretation of diagnostic studies such as chest X-rays and electrocardiograms in addition to predicting patient outcomes and future disease onset. However, AI has evolved with the introduction of transformer models, allowing analysis of the diverse, multimodal data sources existing in medicine today.
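As a schematic of what “multimodal” can mean in practice (a hypothetical setup, not any specific published model), the sketch below fuses an imaging embedding with a few tabular clinical variables before a single risk head. All values and dimensions are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical multimodal fusion: an imaging embedding (e.g. from a chest X-ray
# encoder) combined with tabular clinical variables before a shared risk head.
image_embedding = rng.normal(size=128)          # stand-in for an encoder output
clinical = np.array([62.0, 1.0, 27.5, 140.0])   # age, sex, BMI, systolic BP (made up)

# Scale the tabular block so neither modality dominates, then concatenate.
clinical_scaled = (clinical - clinical.mean()) / (clinical.std() + 1e-9)
fused = np.concatenate([image_embedding, clinical_scaled])

# A single linear risk head as a placeholder for the downstream predictor.
weights = rng.normal(size=fused.shape[0]) * 0.01
risk_score = 1 / (1 + np.exp(-(fused @ weights)))
print(f"Toy cardiometabolic risk score: {risk_score:.2f}")
```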

The elevated cardiovascular disease risk among people with HIV is even greater than predicted by a standard risk calculator in several groups, including Black people and cisgender women, according to analyses from a large international clinical trial primarily funded by the National Institutes of Health and presented at the 2024 Conference on Retroviruses and Opportunistic Infections (CROI) in Denver. The risk of having a first major cardiovascular event was also higher than previously predicted for people from high-income regions and those whose HIV replication was not suppressed below detectable levels.

Researchers examined the incidence of major adverse cardiovascular events in people who did not take pitavastatin or other statins during the Randomized Trial to Prevent Vascular Events in HIV (REPRIEVE), a large clinical trial to test whether pitavastatin—a cholesterol-lowering drug known to prevent cardiovascular disease—could prevent major adverse cardiovascular events, such as heart attacks and strokes, in people with HIV. The scientists compared the incidence of cardiovascular events in the trial to the incidence predicted by standard estimates, which use the American College of Cardiology and American Heart Association’s Pooled Cohort Risk Equations (PCE) score.

They found that the rate of cardiovascular events occurring in many groups of people differed from predicted rates, even considering that people with HIV have a higher overall risk of cardiovascular disease than people without HIV, including double the risk of major adverse cardiovascular events. Notably, in high-income regions—as defined by the global burden of disease classification system—including North and South America and Europe, cardiovascular event rates were higher overall, with cisgender women experiencing about two and a half times more events than predicted, and Black participants having more than 50% higher event rates than predicted.
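For readers who want to see what an observed-versus-predicted comparison looks like mechanically, here is a toy calculation of observed-to-expected event ratios. The numbers are invented for illustration and are not REPRIEVE data; the actual analyses use the PCE risk equations and proper survival statistics.

```python
# Illustrative observed-to-expected comparison with entirely made-up numbers.
groups = {
    # group: (observed events, person-years, predicted rate per 1,000 person-years)
    "cisgender women": (30, 4000, 3.0),
    "all participants": (120, 20000, 5.0),
}

for name, (observed, person_years, predicted_rate) in groups.items():
    observed_rate = 1000 * observed / person_years
    expected_events = predicted_rate * person_years / 1000
    ratio = observed / expected_events
    print(f"{name}: observed {observed_rate:.1f} per 1,000 person-years, "
          f"observed-to-expected ratio {ratio:.2f}")
```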

For decades, scientists and pathologists have tried, without much success, to come up with a way to determine which individual lung cancer patients are at greatest risk of having their illness spread, or metastasize, to other parts of the body.

Now a team of scientists from Caltech and the Washington University School of Medicine in St. Louis has fed that problem to artificial intelligence (AI) algorithms, asking computers to predict which cancer cases are likely to metastasize. In a novel study of non-small cell lung cancer (NSCLC) patients, AI outperformed expert pathologists in making such predictions.

These predictions about the progression of lung cancer have important implications in terms of an individual patient’s life. Physicians treating early-stage NSCLC patients face the extremely difficult decision of whether to intervene with expensive, toxic treatments, such as chemotherapy or radiation, after a patient undergoes lung surgery. In some ways, this is the more cautious path because more than half of stage I–III NSCLC patients eventually experience metastasis to the brain. But that means many others do not. For those patients, such difficult treatments are wholly unnecessary.

The idea of objects seamlessly disappearing, not just in controlled laboratory environments but also in real-world scenarios, has long captured the popular imagination. This concept epitomizes the trajectory of human civilization, from primitive camouflage techniques to the sophisticated metamaterial-based cloaks of today.

Recently, this goal was further highlighted in Science, as one of the “125 questions: exploration and discovery.” Researchers from Zhejiang University have made strides in this direction by demonstrating an intelligent aero amphibious invisibility cloak. This cloak can maintain invisibility amidst dynamic environments, neutralizing external stimuli.

Despite decades of research and the emergence of numerous invisibility cloak prototypes, achieving an aero amphibious cloak capable of manipulating electromagnetic scattering against ever-changing landscapes remains a formidable challenge. The hurdles are multifaceted, ranging from the need for complex-amplitude tunable metasurfaces to the absence of intelligent algorithms capable of addressing inherent issues such as non-uniqueness and incomplete inputs.

A novel architecture for optical neural networks utilizes wavefront shaping to precisely manipulate the travel of ultrashort pulses through multimode fibers, enabling nonlinear optical computation.

Present-day artificial intelligence systems rely on billions of adjustable parameters to accomplish complex objectives. Yet, the vast quantity of these parameters incurs significant expenses. The training and implementation of such extensive models demand considerable memory and processing power, available only in enormous data center facilities, consuming energy on par with the electrical demands of medium-sized cities. In response, researchers are currently reevaluating both the computing infrastructure and the machine learning algorithms to ensure the sustainable advancement of artificial intelligence continues at its current rate.

Optical implementation of neural network architectures is a promising avenue because of the low-power implementation of the connections between the units. New research reported in Advanced Photonics combines light propagation inside multimode fibers with a small number of digitally programmable parameters and achieves the same performance on image classification tasks as fully digital systems with more than 100 times more programmable parameters.
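The sketch below is a toy numerical analogue of that hybrid idea, not the authors’ optical setup: a fixed random nonlinear projection stands in for propagation through the multimode fiber, and only a small digital readout layer is trained. All sizes and the training loop are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

# Stand-in for the fixed physical layer: light propagating through a multimode
# fiber mixes the input nonlinearly, but none of this mixing is trainable.
n_inputs, n_modes, n_classes = 64, 256, 4
mixing = rng.normal(size=(n_modes, n_inputs)) + 1j * rng.normal(size=(n_modes, n_inputs))

def fiber_like_features(x):
    intensity = np.abs(mixing @ x) ** 2          # intensity detection is nonlinear
    return intensity / np.linalg.norm(intensity)

# Only this small digital readout is trained: the "few programmable parameters".
readout = np.zeros((n_classes, n_modes))

def train_step(x, label, lr=0.1):
    global readout
    features = fiber_like_features(x)
    logits = readout @ features
    probs = np.exp(logits - logits.max())
    probs /= probs.sum()
    target = np.zeros(n_classes)
    target[label] = 1.0
    # Gradient of softmax cross-entropy with respect to the readout weights.
    readout -= lr * np.outer(probs - target, features)

# Toy usage with random "images" and random labels.
for _ in range(200):
    x = rng.normal(size=n_inputs)
    train_step(x, label=int(rng.integers(n_classes)))
```

The design point mirrors the paper’s framing: the expensive, high-dimensional transformation is delegated to fixed physics, while the trainable digital part stays small.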