
Visit https://brilliant.org/Veritasium/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription.

Digital computers have served us well for decades, but the rise of artificial intelligence demands a totally new kind of computer: analog.

Thanks to Mike Henry and everyone at Mythic for the analog computing tour! https://www.mythic-ai.com/
Thanks to Dr. Bernd Ulmann, who created The Analog Thing and taught us how to use it. https://the-analog-thing.org.
Moore’s Law was filmed at the Computer History Museum in Mountain View, CA.
Welch Labs’ ALVINN video: https://www.youtube.com/watch?v=H0igiP6Hg1k.

▀▀▀
References:
Crevier, D. (1993). AI: The Tumultuous History Of The Search For Artificial Intelligence. Basic Books. – https://ve42.co/Crevier1993
Valiant, L. (2013). Probably Approximately Correct. HarperCollins. – https://ve42.co/Valiant2013
Rosenblatt, F. (1958). The Perceptron: A Probabilistic Model for Information Storage and Organization in the Brain. Psychological Review, 65, 386–408. – https://ve42.co/Rosenblatt1958
NEW NAVY DEVICE LEARNS BY DOING; Psychologist Shows Embryo of Computer Designed to Read and Grow Wiser (1958). The New York Times, p. 25. – https://ve42.co/NYT1958
Mason, H., Stewart, D., and Gill, B. (1958). Rival. The New Yorker, p. 45. – https://ve42.co/Mason1958
Alvinn driving NavLab footage – https://ve42.co/NavLab.
Pomerleau, D. (1989). ALVINN: An Autonomous Land Vehicle In a Neural Network. NeurIPS, 1, 305–313. – https://ve42.co/Pomerleau1989
ImageNet website – https://ve42.co/ImageNet.
Russakovsky, O., Deng, J. et al. (2015). ImageNet Large Scale Visual Recognition Challenge. – https://ve42.co/ImageNetChallenge.
AlexNet Paper: Krizhevsky, A., Sutskever, I., Hinton, G. (2012). ImageNet Classification with Deep Convolutional Neural Networks. NeurIPS, 25, 1097–1105. – https://ve42.co/AlexNet.
Karpathy, A. (2014). Blog post: What I learned from competing against a ConvNet on ImageNet. – https://ve42.co/Karpathy2014
Fick, D. (2018). Blog post: Mythic @ Hot Chips 2018. – https://ve42.co/MythicBlog.
Jin, Y. & Lee, B. (2019). 2.2 Basic operations of flash memory. Advances in Computers, 114, 1–69. – https://ve42.co/Jin2019
Demler, M. (2018). Mythic Multiplies in a Flash. The Microprocessor Report. – https://ve42.co/Demler2018
Aspinity (2021). Blog post: 5 Myths About AnalogML. – https://ve42.co/Aspinity.
Wright, L. et al. (2022). Deep physical neural networks trained with backpropagation. Nature, 601, 549–555. – https://ve42.co/Wright2022
Waldrop, M. M. (2016). The chips are down for Moore’s law. Nature, 530, 144–147. – https://ve42.co/Waldrop2016

▀▀▀
Special thanks to Patreon supporters: Kelly Snook, TTST, Ross McCawley, Balkrishna Heroor, 65square.com, Chris LaClair, Avi Yashchin, John H. Austin, Jr., OnlineBookClub.org, Dmitry Kuzmichev, Matthew Gonzalez, Eric Sexton, john kiehl, Anton Ragin, Benedikt Heinen, Diffbot, Micah Mangione, MJP, Gnare, Dave Kircher, Burt Humburg, Blake Byers, Dumky, Evgeny Skvortsov, Meekay, Bill Linder, Paul Peijzel, Josh Hibschman, Mac Malkawi, Michael Schneider, jim buckmaster, Juan Benet, Ruslan Khroma, Robert Blum, Richard Sundvall, Lee Redden, Vincent, Stephen Wilcox, Marinus Kuivenhoven, Clayton Greenwell, Michael Krugman, Cy ‘kkm’ K’Nelson, Sam Lutfi, Ron Neal.

▀▀▀
Written by Derek Muller, Stephen Welch, and Emily Zhang.
Filmed by Derek Muller, Petr Lebedev, and Emily Zhang.
Animation by Iván Tello, Mike Radjabov, and Stephen Welch.
Edited by Derek Muller.
Additional video/photos supplied by Getty Images and Pond5
Music from Epidemic Sound.
Produced by Derek Muller, Petr Lebedev, and Emily Zhang.

▶ Check out Brilliant with this link to receive a 20% discount! https://brilliant.org/NewMind/

The millennia-old idea of expressing signals and data as a series of discrete states ignited a revolution in the semiconductor industry during the second half of the 20th century. This new information age thrived on the robust and rapidly evolving field of digital electronics. The abundance of automation and tooling made it relatively manageable to scale designs in complexity and performance as demand grew. However, the power consumed by AI and machine learning applications cannot feasibly continue to grow on existing processing architectures.

THE MAC
In a digital neural network implementation, the weights and input data are stored in system memory and must be fetched and stored continuously as data flows through the sea of multiply-accumulate operations within the network. As a result, most of the power is dissipated moving model parameters and input data to and from the arithmetic logic unit of the CPU, where the actual multiply-accumulate operation takes place. For a typical multiply-accumulate operation in a general-purpose CPU, this data movement consumes more than two orders of magnitude more energy than the computation itself.
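As a rough illustration (a naive sketch, not any particular chip's design; the layer sizes and NumPy usage are arbitrary choices), the loop below spells out the multiply-accumulate work for one fully connected layer. Every pass through the inner loop implies fetching a weight and an input before the single multiply-add, and in a general-purpose processor it is that traffic, not the arithmetic, that dominates the energy budget.

```python
import numpy as np

def dense_layer_mac(inputs, weights, bias):
    """Naive multiply-accumulate (MAC) loop for one dense layer.

    Each inner-loop iteration fetches a weight and an input from memory
    before performing a single multiply-add; in a general-purpose CPU
    these fetches, not the arithmetic, dominate the energy cost.
    """
    outputs = np.zeros(weights.shape[0], dtype=np.float32)
    for j in range(weights.shape[0]):            # one output neuron at a time
        acc = bias[j]
        for i in range(inputs.shape[0]):         # MAC over all inputs
            acc += weights[j, i] * inputs[i]     # fetch weight + input, multiply-accumulate
        outputs[j] = acc
    return outputs

# Illustrative sizes only: a 256-input, 128-output layer performs 32,768 MACs,
# and each one implies separate weight/activation traffic in a naive design.
x = np.random.rand(256).astype(np.float32)
W = np.random.rand(128, 256).astype(np.float32)
b = np.zeros(128, dtype=np.float32)
y = dense_layer_mac(x, W, b)
```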

GPUs
Their ability to process 3D graphics requires a large number of arithmetic logic units coupled to high-speed memory interfaces. This characteristic inherently made them far faster and more efficient for machine learning, allowing hundreds of multiply-accumulate operations to be processed simultaneously. GPUs tend to use floating-point arithmetic, representing each number with 32 bits divided among a sign, an exponent, and a mantissa. Because of this, GPU-targeted machine learning applications have been forced to use floating-point numbers.
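For concreteness, the short sketch below (plain Python, nothing GPU-specific) pulls apart the 32-bit pattern of a single-precision float into the sign, exponent, and mantissa fields mentioned above.

```python
import struct

def float32_fields(x):
    """Split an IEEE 754 single-precision float into sign, exponent, and mantissa bits."""
    bits = struct.unpack(">I", struct.pack(">f", x))[0]  # raw 32-bit pattern
    sign = bits >> 31                  # 1 bit
    exponent = (bits >> 23) & 0xFF     # 8 bits, biased by 127
    mantissa = bits & 0x7FFFFF         # 23 fraction bits
    return sign, exponent, mantissa

s, e, m = float32_fields(-6.25)
print(s, e, m)                                      # 1 129 4718592
# Reconstruct the value from its fields:
print((-1) ** s * (1 + m / 2 ** 23) * 2 ** (e - 127))   # -6.25
```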

ASICS
These dedicated AI chips offer dramatically more data movement per joule than GPUs and general-purpose CPUs. This came as a result of the discovery that, for certain types of neural networks, a dramatic reduction in computational precision reduces network accuracy only slightly. However, it will soon become infeasible to keep increasing the number of multiply-accumulate units integrated onto a chip, or to reduce bit precision further.
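A minimal sketch of that precision-reduction idea, assuming simple per-tensor symmetric quantization (real accelerators and toolchains use more elaborate, calibrated schemes): each 32-bit weight is rounded to a signed 8-bit integer, and the worst-case rounding error stays small relative to the weight range.

```python
import numpy as np

def quantize_int8(weights):
    """Symmetric 8-bit quantization: map float weights onto signed 8-bit levels."""
    scale = np.abs(weights).max() / 127.0                       # one scale per tensor
    q = np.clip(np.round(weights / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q, scale):
    return q.astype(np.float32) * scale

# Arbitrary example weights; real networks are quantized layer by layer,
# often with calibration data to choose better scales.
w = np.random.randn(128, 256).astype(np.float32)
q, scale = quantize_int8(w)
err = np.abs(dequantize(q, scale) - w).max()
print(f"max absolute rounding error: {err:.5f} (scale = {scale:.5f})")
```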

LOW POWER AI


The mainframe, the hardware stalwart that has existed for decades, continues to be a force in the modern era.

Among the vendors that still build mainframes is IBM, which today announced the latest iteration of its Linux-focused mainframe system, dubbed the LinuxOne Emperor 4. IBM has been building LinuxOne systems since 2015, when the first Emperor mainframe made its debut, and has been updating the platform on a roughly two-year cadence.

A grim future awaits the United States if it loses the competition with China on developing key technologies like artificial intelligence in the near future, the authors of a special government-backed study told reporters on Monday.

If China wins the technological competition, it can use its advancements in artificial intelligence and biological technology to enhance its economy, military and society to the detriment of others, said Bob Work, former deputy defense secretary and co-chair of the Special Competitive Studies Project, which examined international artificial intelligence and technological competition. Work is the chair of the U.S. Naval Institute Board of Directors.

Losing, in Work’s opinion, means that U.S. security will be threatened as China is able to establish global surveillance, companies will lose trillions of dollars and America will be reliant on China or other countries under Chinese influence for core technologies.

In recent years, engineers and computer scientists have created a wide range of technological tools that can enhance fitness training experiences, including smart watches, fitness trackers, sweat-resistant earphones or headphones, smart home gym equipment and smartphone applications. New state-of-the-art computational models, particularly deep learning algorithms, have the potential to improve these tools further, so that they can better meet the needs of individual users.

Researchers at the University of Brescia in Italy have recently developed a computer vision system for a smart mirror that could improve the effectiveness of fitness training in both home and gym environments. The system, introduced in a paper published by the International Society of Biomechanics in Sports, is based on a deep learning algorithm trained to recognize human gestures in video recordings.

“Our commercial partner ABHorizon invented the concept of a product that can guide and teach you during your personal fitness training,” Bernardo Lanza, one of the researchers who carried out the study, told TechXplore. “This device can show you the best way to train based on your specific needs. To develop this device further, they asked us to investigate the viability of an integrated vision system for exercise evaluation.”

How can mobile robots perceive and understand the environment correctly, even if parts of the environment are occluded by other objects? This is a key question that must be solved for self-driving vehicles to safely navigate in large crowded cities. While humans can imagine complete physical structures of objects even when they are partially occluded, existing artificial intelligence (AI) algorithms that enable robots and self-driving vehicles to perceive their environment do not have this capability.

Robots with AI can already find their way around and navigate on their own once they have learned what their environment looks like. However, perceiving the entire structure of objects when they are partially hidden, such as people in crowds or vehicles in traffic jams, has been a significant challenge. A major step towards solving this problem has now been taken by Freiburg robotics researchers Prof. Dr. Abhinav Valada and Ph.D. student Rohit Mohan from the Robot Learning Lab at the University of Freiburg, which they have presented in two joint publications.

The two Freiburg scientists have developed the amodal panoptic segmentation task and demonstrated its feasibility using novel AI approaches. Until now, self-driving vehicles have used panoptic segmentation to understand their surroundings.

AN ARTIFICIAL intelligence text-to-image model has forecasted a disturbing end to mankind’s existence.

The popular Craiyon AI, formerly the DALL-E mini image generator, produced images of barren landscapes and scorched plains when prompted to predict the end of humans.

The AI has been trained to create its masterpieces using unfiltered data from the internet.

ROBOTS could one day overthrow humans in an ‘apocalyptic’ takeover, a tech expert has predicted.

Aidan Meller, the creator of the Ai-Da robot, believes that within three years artificial intelligence (AI) could overtake humanity, per The Daily Star.

He also backs Elon Musk’s belief that advances in AI could impact mankind more than nuclear war.

Being able to decode brainwaves could help patients who have lost the ability to speak to communicate again, and could ultimately provide novel ways for humans to interact with computers. Now Meta researchers have shown they can tell what words someone is hearing using recordings from non-invasive brain scans.

Our ability to probe human brain activity has improved significantly in recent decades as scientists have developed a variety of brain-computer interface (BCI) technologies that can provide a window into our thoughts and intentions.

The most impressive results have come from invasive recording devices, which implant electrodes directly into the brain’s gray matter, combined with AI that can learn to interpret brain signals. In recent years, this has made it possible to decode complete sentences from someone’s neural activity with 97 percent accuracy, and translate attempted handwriting movements directly into text at speeds comparable to texting.