Oscilloscopes were once commonly called CROs, short for cathode-ray oscilloscopes, because they relied on cathode-ray tubes for their displays. Technology has moved on quickly since then, and today’s oscilloscopes almost entirely rely on modern screens like LCDs. However, [lonesoulsurfer] went a different route with this fun DIY build, creating an oscilloscope with a low-resolution LED display.
Yes, the signals are shown on a 10×10 matrix made up of red LEDs. The individual pixels look nicely diffused and chunky because [lonesoulsurfer] was able to source square 5mm LEDs for the build. The whole project uses only four ICs: a decade counter and an LM3914 LED driver to run the display, a 555 timer for the clock input, and an LM386 audio amplifier to boost incoming signals.
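For a sense of how such a display maps a waveform, here is a minimal Python sketch (not the project’s circuit, which is entirely analog and has no firmware): it treats the ten columns as successive time steps, the decade counter’s job, and the ten rows as amplitude levels, the LM3914’s job.

```python
# Minimal sketch: render a 10-sample waveform on a 10x10 "LED" grid,
# one lit pixel per column, with row position set by amplitude.
import math

ROWS, COLS = 10, 10

def render(samples):
    """samples: 10 values in [0.0, 1.0]; returns one string per display row."""
    levels = [min(ROWS - 1, int(s * ROWS)) for s in samples]
    grid = [["." for _ in range(COLS)] for _ in range(ROWS)]
    for col, level in enumerate(levels):
        grid[ROWS - 1 - level][col] = "#"   # light one LED per column
    return ["".join(row) for row in grid]

if __name__ == "__main__":
    # one cycle of a sine wave sampled at 10 points
    wave = [0.5 + 0.5 * math.sin(2 * math.pi * i / COLS) for i in range(COLS)]
    print("\n".join(render(wave)))
```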
With a mic fitted onboard, the oscilloscope can act as a simple music visualizer, or it can be used with a probe to investigate real circuits. It may not have the resolution or precision for fine work, but it’ll at least tell you whether your microcontroller’s clock is running when you’re scratching your head over why a simple project isn’t working.
Hannah wraps up the series by meeting DeepMind co-founder and CEO, Demis Hassabis. In an extended interview, Demis describes why he believes AGI is possible, how we can get there, and the problems he hopes it will solve. Along the way, he highlights the important role of consciousness and why he’s so optimistic that AI can help solve many of the world’s major challenges. As a final note, Demis shares the story of a personal meeting with Stephen Hawking to discuss the future of AI and discloses Hawking’s parting message.
Interviewee: DeepMind co-founder and CEO, Demis Hassabis.
Credits. Presenter: Hannah Fry. Series Producer: Dan Hardoon. Production support: Jill Achineku. Sound design: Emma Barnaby. Music composition: Eleni Shaw. Sound Engineer: Nigel Appleton. Editor: David Prest. Commissioned by DeepMind.
Thank you to everyone who made this season possible!
Visit https://brilliant.org/Veritasium/ to get started learning STEM for free, and the first 200 people will get 20% off their annual premium subscription. Digital computers have served us well for decades, but the rise of artificial intelligence demands a totally new kind of computer: analog.
▀▀▀ Special thanks to Patreon supporters: Kelly Snook, TTST, Ross McCawley, Balkrishna Heroor, 65square.com, Chris LaClair, Avi Yashchin, John H. Austin, Jr., OnlineBookClub.org, Dmitry Kuzmichev, Matthew Gonzalez, Eric Sexton, john kiehl, Anton Ragin, Benedikt Heinen, Diffbot, Micah Mangione, MJP, Gnare, Dave Kircher, Burt Humburg, Blake Byers, Dumky, Evgeny Skvortsov, Meekay, Bill Linder, Paul Peijzel, Josh Hibschman, Mac Malkawi, Michael Schneider, jim buckmaster, Juan Benet, Ruslan Khroma, Robert Blum, Richard Sundvall, Lee Redden, Vincent, Stephen Wilcox, Marinus Kuivenhoven, Clayton Greenwell, Michael Krugman, Cy ‘kkm’ K’Nelson, Sam Lutfi, Ron Neal.
Additive manufacturing, or 3D printing, can create custom parts for electromagnetic devices on demand and at low cost. These devices are highly sensitive, and each component requires precise fabrication. Until recently, though, the only way to diagnose printing errors was to make, measure and test a device or to use in-line simulation, both of which are computationally expensive and inefficient.
To remedy this, a research team co-led by Penn State created a first-of-its-kind methodology for diagnosing printing errors with machine learning in real time. The researchers describe this framework, published in Additive Manufacturing, as a critical first step toward correcting 3D-printing errors in real time. According to the researchers, this could make printing for sensitive devices much more effective in terms of time, cost and computational bandwidth.
“A lot of things can go wrong during the additive manufacturing process for any component,” said Greg Huff, associate professor of electrical engineering at Penn State. “And in the world of electromagnetics, where dimensions are based on wavelengths rather than regular units of measure, any small defect can really contribute to large-scale system failures or degraded operations. If 3D printing a household item is like tuning a tuba—which can be done with broad adjustments—3D-printing devices functioning in the electromagnetic domain is like tuning a violin: Small adjustments really matter.”
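The paper’s actual features and model aren’t detailed in this summary, but the core idea, classifying in-process measurements quickly enough to flag a defect while the part is still printing, can be illustrated with a toy sketch. The feature names and data below are hypothetical, not the team’s pipeline.

```python
# Toy illustration only: train a classifier on synthetic "in-situ" measurements
# (assumed features such as layer height error and extrusion width) and score
# each new layer as it arrives, so a defect can be flagged mid-print.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

n = 500
good = rng.normal(loc=[0.20, 0.45], scale=0.01, size=(n, 2))   # nominal layers
bad = rng.normal(loc=[0.26, 0.38], scale=0.02, size=(n, 2))    # defective layers
X = np.vstack([good, bad])
y = np.array([0] * n + [1] * n)                                # 0 = ok, 1 = defect

clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, y)

# "Real-time" use: score the latest layer's measurements.
new_layer = np.array([[0.25, 0.39]])
print("defect probability:", clf.predict_proba(new_layer)[0, 1])
```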
– About ColdFusion – ColdFusion is an Australia-based online media company independently run by Dagogo Altraide since 2009. Topics cover anything in science, technology, history and business in a calm and relaxed environment.
In this video we look at the structure of the US Social Security system and how it closely resembles a Ponzi scheme. Current retirees are paid from payroll taxes collected from existing workers. In recent years, demographic trends in the US have deteriorated significantly, and Social Security is expected to become insolvent by 2033.
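To see why the demographics matter, here is a purely illustrative back-of-the-envelope sketch; the figures are made up for the example, not actual Social Security data. It shows how a falling worker-to-retiree ratio flips a pay-as-you-go system from surplus to deficit.

```python
# Illustrative arithmetic only: a pay-as-you-go system stays solvent while
# payroll tax inflows from current workers cover benefits paid to retirees.
def annual_balance(workers, retirees, avg_wage, tax_rate, avg_benefit):
    inflow = workers * avg_wage * tax_rate
    outflow = retirees * avg_benefit
    return inflow - outflow

# Hypothetical scenario: the worker-to-retiree ratio falls from 3:1 to 2:1.
for workers_per_retiree in (3.0, 2.0):
    balance = annual_balance(
        workers=workers_per_retiree * 1_000_000,
        retirees=1_000_000,
        avg_wage=60_000,       # hypothetical average wage
        tax_rate=0.124,        # roughly the combined employer/employee payroll tax rate
        avg_benefit=20_000,    # hypothetical average annual benefit
    )
    print(f"{workers_per_retiree}:1 ratio -> annual balance per 1M retirees: ${balance:,.0f}")
```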
Concetta Antico is the world’s most famous tetrachromat, meaning she has four types of color receptors (cone cells) in her eyes; most of us have three. As a result of this mutation, Antico can see around 100 million colors, roughly 100 times the one million or so that the rest of us can distinguish. Antico is an artist, and she says that her psychedelic color paintings depict what she perceives. I wonder, though, what her paintings look like through her eyes. From The Guardian:
According to Dr Kimberly Jameson, a University of California scientist who has studied Antico, just having the gene – which around 15% of women have – is not by itself sufficient to make someone a tetrachromat, but it is a necessary condition. “In Concetta’s case … one thing we believe is that because she’s been painting sort of continuously since the age of seven years old, she has really enlisted this extra potential and used it. This is how genetics works: it gives you the potential to do things and if the environment demands that you do that thing, then the genes kick in.”[…]
While the natural world is a positive stimulant for Antico, many man-made environments, such as a large shopping centre with fluorescent lighting, have the opposite effect. “I feel very uneasy. I actually avoid going into those kinds of buildings unless I absolutely have to,” she says. “I don’t enjoy the barrage, the massive onslaught of bits of unattractive colour. I mean, there’s a difference between looking at a row of stuff in a grocery store and looking at a row of trees. It’s like, it’s ugly, and the lights are garish. It makes me not happy.”
Music by Graham Haerther (http://www.Haerther.net) Audio editing by Eric Schneider. Motion graphics by Vincent de Langen. Thumbnail by Simon Buckmaster. Writing & Direction by Evan.
This includes a paid sponsorship which had no part in the writing, editing, or production of the rest of the video.
If you download music online, you can get accompanying information embedded into the digital file that might tell you the name of the song, its genre, the featured artists on a given track, the composer, and the producer. Similarly, if you download a digital photo, you can obtain information that may include the time, date, and location at which the picture was taken. That led Mustafa Doga Dogan to wonder whether engineers could do something similar for physical objects. “That way,” he mused, “we could inform ourselves faster and more reliably while walking around in a store or museum or library.”
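As a small, generic illustration of that file-metadata idea (not part of the MIT work itself), a few lines of Python with Pillow can dump a photo’s EXIF fields; “photo.jpg” here is just a placeholder file name.

```python
# Read and print a photo's embedded EXIF metadata (date, camera model, GPS, ...).
from PIL import Image, ExifTags

exif = Image.open("photo.jpg").getexif()   # hypothetical file
for tag_id, value in exif.items():
    name = ExifTags.TAGS.get(tag_id, tag_id)   # e.g. DateTime, Model, GPSInfo
    print(f"{name}: {value}")
```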
The idea, at first, was a bit abstract for Dogan, a fourth-year PhD student in the MIT Department of Electrical Engineering and Computer Science. But his thinking solidified in late 2020, when he heard about a new smartphone model with a camera that uses the infrared (IR) range of the electromagnetic spectrum, which the naked eye can’t perceive. IR light, moreover, has a unique ability to see through certain materials that are opaque to visible light. It occurred to Dogan that this feature, in particular, could be useful.
The concept he has since come up with, working with colleagues at MIT’s Computer Science and Artificial Intelligence Lab (CSAIL) and a research scientist at Facebook, is called InfraredTags. In place of the standard barcodes affixed to products, which may be removed or detached or become otherwise unreadable over time, these tags are unobtrusive (because they are invisible) and far more durable, since they’re embedded within the interior of objects fabricated on standard 3D printers.
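The authors’ decoding pipeline isn’t described here, but as a rough sketch of the reading step, assuming the embedded tag shows up as an ordinary QR-style code in an image from an IR-capable camera, OpenCV could recover the data; “tag_ir.png” is a hypothetical capture.

```python
# Rough sketch, not the InfraredTags authors' pipeline: clean up a faint IR
# capture of an embedded tag and try to decode it as a QR code.
import cv2

img = cv2.imread("tag_ir.png", cv2.IMREAD_GRAYSCALE)   # hypothetical IR capture

# Boost contrast: the code is faint because the IR light passes through the shell.
img = cv2.equalizeHist(img)
img = cv2.threshold(img, 0, 255, cv2.THRESH_BINARY + cv2.THRESH_OTSU)[1]

data, points, _ = cv2.QRCodeDetector().detectAndDecode(img)
print("decoded metadata:" if data else "no tag found:", data)
```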