
At some point, theoretical physics shades into science fiction. This is a beautiful little book, by a celebrated physicist and writer, about a phenomenon that is permitted by equations but might not actually exist. Or perhaps white holes do exist, and are everywhere: we just haven’t noticed them yet. No such controversy exists about black holes, wh…

Amidst rapid technological advancements, Tiny AI is emerging as a silent powerhouse. Imagine algorithms compressed to fit microchips yet capable of recognizing faces, translating languages, and predicting market trends. Tiny AI operates discreetly within our devices, orchestrating smart homes and propelling advancements in personalized medicine.

Tiny AI excels in efficiency, adaptability, and impact by utilizing compact neural networks, streamlined algorithms, and edge computing capabilities. It represents a form of artificial intelligence that is lightweight, efficient, and positioned to revolutionize various aspects of our daily lives.
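One concrete way models are slimmed down for such devices is post-training quantization, which stores weights as 8-bit integers instead of 32-bit floats. The following is a minimal, illustrative PyTorch sketch; the tiny two-layer model is purely hypothetical, not a production Tiny AI system:

```python
# Minimal, illustrative sketch: dynamic quantization shrinks a trained
# network's Linear weights from 32-bit floats to 8-bit integers so the
# model fits more easily on constrained edge hardware.
import torch
import torch.nn as nn

# A tiny stand-in classifier (hypothetical, for illustration only).
model = nn.Sequential(
    nn.Linear(64, 32),
    nn.ReLU(),
    nn.Linear(32, 10),
)

# Convert Linear layers to int8 weights after training.
quantized = torch.quantization.quantize_dynamic(
    model, {nn.Linear}, dtype=torch.qint8
)

# Same forward pass, smaller memory footprint.
x = torch.randn(1, 64)
print(quantized(x).shape)  # torch.Size([1, 10])
```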

Looking to the future, quantum computing and neuromorphic chips are emerging technologies taking us into unexplored territory. Quantum computing works differently from conventional computers, allowing for faster problem-solving, realistic simulation of molecular interactions, and quicker decryption of codes. It is not just a sci-fi idea anymore; it’s becoming a real possibility.

In the rapidly evolving landscape of artificial intelligence, the quest for hardware that can keep pace with the burgeoning computational demands is relentless. A significant breakthrough in this quest has been achieved through a collaborative effort spearheaded by Purdue University, alongside the University of California San Diego (UCSD) and École Supérieure de Physique et de Chimie Industrielles (ESPCI) in Paris. This collaboration marks a pivotal advancement in the field of neuromorphic computing, a revolutionary approach that seeks to emulate the human brain’s mechanisms within computing architecture.

The Challenges of Current AI Hardware

The rapid advancements in AI have ushered in complex algorithms and models, demanding an unprecedented level of computational power. Yet, as we delve deeper into the realms of AI, a glaring challenge emerges: the inadequacy of current silicon-based computer architectures in keeping pace with the evolving demands of AI technology.

In the digital age, where entertainment is but a click away, a silent yet powerful transformation is underway. Streaming companies, the vanguards of this digital entertainment era, are not just delivering content; they’re crafting experiences, and artificial intelligence (AI) is their most adept tool. Let us explore how AI is not just changing, but revolutionizing the way we consume media.

Gone are the days of aimlessly browsing channels to find something to watch. AI in streaming services is like a discerning director, understanding and curating content to fit the unique tastes of each viewer. It’s an era where your streaming service knows what you want to watch, sometimes even before you do. The great power of AI is personalization: organizations can craft unique user journeys, and personalization sits at the core of AI’s integration into streaming. Netflix, the colossus of streaming, employs AI algorithms to recommend movies and shows based on your viewing history. In practice, though, recommendation engines built purely on historical preferences have limited value. Traditional approaches rely on past viewing data or collaborative filtering to suggest content, and customer feedback has shown these to be imperfect fits in an age that demands data-driven, precise product-market matching.
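To make the idea concrete, here is a minimal, illustrative sketch of the user-based collaborative filtering these traditional engines rely on; the ratings matrix and titles are made up, and real systems operate at vastly larger scale with far richer signals:

```python
# Illustrative sketch of collaborative filtering: users with similar
# viewing histories are assumed to like similar titles.
import numpy as np

# Rows = users, columns = titles; 0 means "not yet watched".
ratings = np.array([
    [5, 4, 0, 1],
    [4, 5, 0, 2],
    [1, 0, 5, 4],
], dtype=float)

def recommend(user, ratings):
    """Score unwatched titles for `user` via cosine similarity to other users."""
    norms = np.linalg.norm(ratings, axis=1, keepdims=True)
    sims = (ratings @ ratings[user]) / (norms.squeeze() * norms[user] + 1e-9)
    sims[user] = 0.0                       # ignore self-similarity
    scores = sims @ ratings                # weight other users' ratings
    scores[ratings[user] > 0] = -np.inf    # hide already-watched titles
    return int(np.argmax(scores))

print(recommend(0, ratings))  # index of the highest-scoring unseen title
```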

The Schwartz Reisman Institute for Technology and Society and the Department of Computer Science at the University of Toronto, in collaboration with the Vector Institute for Artificial Intelligence and the Cosmic Future Initiative at the Faculty of Arts & Science, present Geoffrey Hinton on October 27, 2023, at the University of Toronto.

0:00:00 — 0:07:20 Opening remarks and introduction.
0:07:21 — 0:08:43 Overview.
0:08:44 — 0:20:08 Two different ways to do computation.
0:20:09 — 0:30:11 Do large language models really understand what they are saying?
0:30:12 — 0:49:50 The first neural net language model and how it works.
0:49:51 — 0:57:24 Will we be able to control super-intelligence once it surpasses our intelligence?
0:57:25 — 1:03:18 Does digital intelligence have subjective experience?
1:03:19 — 1:55:36 Q&A
1:55:37 — 1:58:37 Closing remarks.

Talk title: “Will digital intelligence replace biological intelligence?”

Abstract: Digital computers were designed to allow a person to tell them exactly what to do. They require high energy and precise fabrication, but in return they allow exactly the same model to be run on physically different pieces of hardware, which makes the model immortal. For computers that learn what to do, we could abandon the fundamental principle that the software should be separable from the hardware and mimic biology by using very low power analog computation that makes use of the idiosyncratic properties of a particular piece of hardware. This requires a learning algorithm that can make use of the analog properties without having a good model of those properties. Using the idiosyncratic analog properties of the hardware makes the computation mortal. When the hardware dies, so does the learned knowledge. The knowledge can be transferred to a younger analog computer by getting the younger computer to mimic the outputs of the older one, but education is a slow and painful process. By contrast, digital computation makes it possible to run many copies of exactly the same model on different pieces of hardware. Thousands of identical digital agents can look at thousands of different datasets and share what they have learned very efficiently by averaging their weight changes. That is why chatbots like GPT-4 and Gemini can learn thousands of times more than any one person. Also, digital computation can use the backpropagation learning procedure, which scales much better than any procedure yet found for analog hardware. This leads me to believe that large-scale digital computation is probably far better at acquiring knowledge than biological computation and may soon be much more intelligent than us. The fact that digital intelligences are immortal and did not evolve should make them less susceptible to religion and wars, but if a digital super-intelligence ever wanted to take control it is unlikely that we could stop it, so the most urgent research question in AI is how to ensure that they never want to take control.
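To illustrate the weight-sharing point from the abstract (this is an explanatory sketch, not Hinton’s code): identical digital copies can train on different data shards and then average their updates so every copy benefits from all of the data.

```python
# Toy sketch of weight sharing: several identical copies of a model each
# compute an update on their own data shard, and the averaged update is
# applied to the shared weights.
import numpy as np

rng = np.random.default_rng(0)
shared_weights = rng.normal(size=5)          # one model, many identical copies

def local_gradient(weights, shard):
    """Hypothetical per-copy gradient computed on that copy's data shard."""
    return weights - shard.mean(axis=0)      # toy least-squares-style gradient

shards = [rng.normal(loc=i, size=(20, 5)) for i in range(4)]  # 4 data shards

for step in range(100):
    grads = [local_gradient(shared_weights, s) for s in shards]
    shared_weights -= 0.1 * np.mean(grads, axis=0)   # average the updates

print(shared_weights.round(2))  # drifts toward the mean of all shards
```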

About Geoffrey Hinton.

Geoffrey Hinton received his PhD in artificial intelligence from Edinburgh in 1978. After five years as a faculty member at Carnegie Mellon he became a fellow of the Canadian Institute for Advanced Research and moved to the Department of Computer Science at the University of Toronto, where he is now an emeritus professor. In 2013, Google acquired Hinton’s neural networks startup, DNNresearch, which developed out of his research at U of T. Subsequently, Hinton was a Vice President and Engineering Fellow at Google until 2023. He is a founder of the Vector Institute for Artificial Intelligence, where he continues to serve as Chief Scientific Adviser.

Imagine your laptop running twice as fast without any hardware upgrades, just the application of smarter software algorithms. That’s the promise of new research that could change how today’s devices function.

The team behind the research, from the University of California, Riverside (UCR), says that the work has huge potential, not just for boosting hardware performance but also for increasing efficiency and significantly reducing energy use.

Referred to as simultaneous and heterogeneous multithreading (SHMT), the innovative process takes advantage of the fact that modern phones, computers, and other gadgets usually rely on more than one processor to do their thinking.
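As a rough illustration of the dispatch idea (not the UCR implementation, which schedules real workloads across CPUs, GPUs, and AI accelerators simultaneously), the sketch below splits a job into chunks and submits them to two stand-in processing units at the same time rather than one after the other:

```python
# Illustrative only: two worker functions stand in for different processing
# units (e.g., a CPU and a GPU/NPU). Independent chunks of work are
# dispatched to both at once instead of sequentially.
from concurrent.futures import ThreadPoolExecutor

def run_on_cpu(chunk):
    """Stand-in for work scheduled on the CPU."""
    return sum(x * x for x in chunk)

def run_on_accelerator(chunk):
    """Stand-in for the same kernel offloaded to a GPU/NPU."""
    return sum(x * x for x in chunk)

data = list(range(1_000_000))
halves = (data[:500_000], data[500_000:])

# Submit both halves simultaneously and combine the partial results.
with ThreadPoolExecutor(max_workers=2) as pool:
    futures = [pool.submit(run_on_cpu, halves[0]),
               pool.submit(run_on_accelerator, halves[1])]
    total = sum(f.result() for f in futures)

print(total)
```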

Having refined its charging algorithms, Polar Night Energy is now ready to scale up the storage tech in Pornainen.

Once completed, the new battery will be integrated with the network of Loviisan Lämpö, the Finnish heating company that supplies district heating in the area.

“Loviisan Lämpö is moving towards more environmentally friendly energy production. With the Sand Battery, we can significantly reduce energy produced by combustion and completely eliminate the use of oil,” says CEO Mikko Paajanen.

The film industry, always at the forefront of technological innovation, is increasingly embracing artificial intelligence (AI) to revolutionize movie production, distribution, and marketing. From script analysis to post-production, AI is already reshaping how movies are made and consumed. Let’s explore the current applications of AI in movie studios and speculate on future uses, highlighting real examples and the transformative impact of these technologies.

AI’s infiltration into the movie industry begins at the scriptwriting stage. Tools like ScriptBook use natural language processing to analyze scripts, predict box office success, and offer insights into plot and character development. For instance, 20th Century Fox employed AI to analyze the script of Logan, which helped in making informed decisions about the movie’s plot and themes. In pre-production, AI has also aided casting and location scouting. Warner Bros. partnered with Cinelytic to use AI for casting decisions, evaluating an actor’s market value to predict a film’s financial success. In location scouting, AI algorithms can sift through thousands of hours of footage to identify suitable filming locations, streamlining what was once a time-consuming process.
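As a rough sketch of the kind of analysis such tools perform (not ScriptBook’s actual method), one could break a screenplay into scenes and score each with an off-the-shelf sentiment model to chart its emotional arc; the scenes below are invented:

```python
# Illustrative sketch: score each scene of a screenplay with a generic
# sentiment model to get a crude view of its emotional trajectory.
from transformers import pipeline

scenes = [
    "INT. LAB - NIGHT. The researcher stares at the failed experiment.",
    "EXT. ROOFTOP - DAY. The team celebrates as the signal finally appears.",
]

analyzer = pipeline("sentiment-analysis")  # default Hugging Face sentiment model

for i, scene in enumerate(scenes, start=1):
    result = analyzer(scene)[0]
    print(f"Scene {i}: {result['label']} ({result['score']:.2f})")
```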

During filmmaking, AI plays a crucial role in visual effects (VFX). Disney’s FaceDirector software can generate composite expressions from multiple takes, enabling directors to adjust an actor’s performance in post-production. This technology was notably used in Avengers: Infinity War to perfect emotional expressions in complex CGI scenes. Similarly, AI-driven deepfake technology, though controversial, has been used to create realistic face swaps in movies; it was used in The Irishman to de-age actors, offering a cost-effective alternative to traditional CGI. Additionally, AI is used in color grading and editing. IBM Watson was used to create the movie trailer for Morgan, analyzing visuals, sounds, and compositions from other movie trailers to determine what would be most appealing to audiences.