One nebulous aspect of the poll, and of many of the headlines about AI we see on a daily basis, is how the technology is defined. What are we referring to when we say “AI”? The term encompasses everything from recommendation algorithms that serve up content on YouTube and Netflix, to large language models like ChatGPT, to models that can design incredibly complex protein architectures, to the Siri assistant built into many iPhones.
IBM’s definition is simple: “a field which combines computer science and robust datasets to enable problem-solving.” Google, meanwhile, defines it as “a set of technologies that enable computers to perform a variety of advanced functions, including the ability to see, understand and translate spoken and written language, analyze data, make recommendations, and more.”
It could be that people’s fear and distrust of AI stems partly from a lack of understanding of the technology, and from a stronger focus on unsettling examples than on positive ones. The AI that can design complex proteins, for instance, may help scientists discover stronger vaccines and other drugs, and could do so on a vastly accelerated timeline.
“Intelligence supposes goodwill,” Simone de Beauvoir wrote in the middle of the twentieth century. In the decades since, as we have entered a new era of technology risen from our minds yet not always consonant with our values, this question of goodwill has faded dangerously from the set of considerations around artificial intelligence and the alarming cult of increasingly advanced algorithms, shiny with technical triumph but dull with moral insensibility.
In de Beauvoir’s day, long before the birth of the Internet and the golden age of algorithms, the visionary mathematician, philosopher, and cybernetics pioneer Norbert Wiener (November 26, 1894–March 18, 1964) addressed these questions with astounding prescience in his 1954 book The Human Use of Human Beings, whose ideas influenced the digital pioneers who shaped our present technological reality and have recently been rediscovered by a new generation of thinkers eager to reinstate the neglected moral dimension into the conversation about artificial intelligence and the future of technology.
A decade after The Human Use of Human Beings, Wiener expanded upon these ideas in a series of lectures at Yale and a philosophy seminar at Royaumont Abbey near Paris, which he reworked into the short, prophetic book God & Golem, Inc. (public library). Published by MIT Press in the final year of his life, it won him a posthumous National Book Award the following year, in the newly established category of Science, Philosophy, and Religion.
“It’s an interesting new approach,” says Peter Sanders, who studies the design and implementation of efficient algorithms at the Karlsruhe Institute of Technology in Germany and who was not involved in the work. “Sorting is still one of the most widely used subroutines in computing,” he says.
DeepMind published its results in Nature today. But the techniques that AlphaDev discovered are already being used by millions of software developers. In January 2022, DeepMind submitted its new sorting algorithms to the organization that manages C++, one of the most popular programming languages in the world, and after two months of rigorous independent vetting, AlphaDev’s algorithms were added to the language. This was the first change to C++’s sorting algorithms in more than a decade and the first update ever to involve an algorithm discovered using AI.
Sorting algorithms are basic functions used constantly by computers around the world, so an improved one created by an artificial intelligence could make millions of programs run faster.
Digital society is driving increasing demand for computation, and with it energy use. For the last five decades, we have relied on improvements in hardware to keep pace. But as microchips approach their physical limits, it’s critical to improve the code that runs on them to make computing more powerful and sustainable. This is especially important for the algorithms that make up the code running trillions of times a day.
In our paper published today in Nature, we introduce AlphaDev, an artificial intelligence (AI) system that uses reinforcement learning to discover enhanced computer science algorithms – surpassing those honed by scientists and engineers over decades.
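Much of AlphaDev’s measured gain came from streamlining the small, fixed-length sorting routines (sorting three, four, or five elements) that library sort functions fall back on for short inputs. As a rough illustration of what such a routine looks like, and emphatically not DeepMind’s assembly-level code, here is a three-element “sorting network” built from a fixed sequence of compare-and-swap steps, sketched in Python:

```python
def sort3(a, b, c):
    """Sort three values using a fixed sequence of compare-and-swap steps
    (a tiny sorting network). Illustrative sketch only, not DeepMind's code."""
    if b < a:          # compare-exchange positions (0, 1)
        a, b = b, a
    if c < a:          # compare-exchange positions (0, 2)
        a, c = c, a
    if c < b:          # compare-exchange positions (1, 2)
        b, c = c, b
    return a, b, c


print(sort3(3, 1, 2))  # (1, 2, 3)
```

Because the sequence of comparisons is fixed and branch behavior is predictable, routines like this are exactly the kind of hot, trillions-of-times-a-day code where shaving even a single instruction pays off.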
The team used publicly available neural network algorithms to program the robotic chef to pick up recipes.
Robots are the future of many industries. Around the world, they are being trained to perform a wide range of tasks with ever greater precision, from cleaning to playing football.
Mastering the art of cooking is one skill where robots still have a long way to go, but they may soon pick up this human ability as well.
General relativity is part of the wide-ranging physical theory of relativity formulated by the German-born physicist Albert Einstein, who conceived it in 1915. It explains gravity based on the way space can ‘curve’ or, to put it more accurately, it associates the force of gravity with the changing geometry of space-time (sometimes simply called ‘Einstein’s gravity’).
The mathematical equations of Einstein’s general theory of relativity, tested time and time again, are currently the most accurate way to predict gravitational interactions, replacing those developed by Isaac Newton several centuries prior.
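Written compactly (a standard textbook statement, not a quotation from the article), the field equations tie the curvature of space-time on the left-hand side to the matter and energy that produce it on the right-hand side:

$$G_{\mu\nu} + \Lambda g_{\mu\nu} = \frac{8\pi G}{c^{4}}\, T_{\mu\nu}$$

where $G_{\mu\nu}$ encodes the curvature of space-time, $g_{\mu\nu}$ is the metric, $\Lambda$ is the cosmological constant, and $T_{\mu\nu}$ is the stress-energy tensor describing matter and energy.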
Over the last century, many experiments have confirmed the validity of both special and general relativity. In the first major test of general relativity, astronomers in 1919 measured the deflection of light from distant stars as the starlight passed by our sun, proving that gravity does, in fact, distort or curve space.
Daniel Lidar, the Viterbi Professor of Engineering at USC and Director of the USC Center for Quantum Information Science & Technology, and Dr. Bibek Pokharel, a Research Scientist at IBM Quantum, have achieved a quantum speedup in the context of a “bitstring guessing game.” They managed strings up to 26 bits long, significantly larger than previously possible, by effectively suppressing the errors typically seen at this scale. (A bit is a binary digit that is either zero or one.) Their paper is published in the journal Physical Review Letters.
Quantum computers promise to solve certain problems with an advantage that increases as the problems increase in complexity. However, they are also highly prone to errors, or noise. The challenge, says Lidar, is “to obtain an advantage in the real world where today’s quantum computers are still ‘noisy.’” This noise-prone condition of current quantum computing is termed the “NISQ” (Noisy Intermediate-Scale Quantum) era, a term adapted from the RISC architecture used to describe classical computing devices. Thus, any present demonstration of quantum speed advantage necessitates noise reduction.
The more unknown variables a problem has, the harder it usually is for a computer to solve. Scholars can evaluate a computer’s performance by playing a type of game with it to see how quickly an algorithm can guess hidden information. For instance, imagine a version of the TV game Jeopardy, where contestants take turns guessing a secret word of known length, one whole word at a time. The host reveals only one correct letter for each guessed word before changing the secret word randomly.
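In this setup each classical guess extracts at most one piece of the hidden string, so recovering an n-bit secret takes on the order of n queries, whereas the quantum algorithm can identify it with a single oracle call. Below is a minimal Python sketch of the classical side of the game, assuming a Bernstein–Vazirani-style parity oracle; the article describes the game only informally, so the exact oracle here is an illustrative assumption:

```python
import random

def make_oracle(secret):
    """Oracle answers each guess with the parity of the bitwise AND of
    guess and secret -- one bit of information per query."""
    def oracle(guess):
        return sum(s & g for s, g in zip(secret, guess)) % 2
    return oracle

n = 26  # bitstring length reported in the experiment
secret = [random.randint(0, 1) for _ in range(n)]
oracle = make_oracle(secret)

# Classical strategy: query with each one-hot string to read off one bit per call.
recovered = [oracle([1 if j == i else 0 for j in range(n)]) for i in range(n)]

print(recovered == secret)  # True, but only after n = 26 separate queries
```

The one-hot strategy reflects the rule of the game: each query reveals exactly one piece of the secret, which is why the classical player cannot do better than roughly one query per bit.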
The mammalian retina is a complex system consisting of cones (for color) and rods (for peripheral, monochrome vision) that provide the raw image data, which is then processed by successive layers of neurons before being sent via the optic nerve to the brain’s visual cortex. In order to emulate this system as closely as possible, researchers at Penn State University have created a system that uses perovskite (methylammonium lead halide, MAPbX3) RGB photodetectors and a neuromorphic processing algorithm that performs processing similar to that of the biological retina.
Panchromatic imaging is defined as being ‘sensitive to light of all colors in the visible spectrum’, which in practice means enhancing the individual color (e.g. RGB) channels using panchromatic (intensity, not frequency) data. For the retina this means that the incoming light is not merely used to determine the separate colors, but also the intensity, which is what underlies the wide dynamic range of the Mark I eyeball. In this experiment, layers of these MAPbX3 (X being Cl, Br, I or a combination thereof) perovskites formed stacked RGB sensors.
The output of these sensor layers was then processed by a pretrained convolutional neural network to generate the final panchromatic image, which could then be used for a wide range of purposes. Applications noted by the researchers include new types of digital cameras as well as artificial retinas, limited mostly by how well the perovskite layers scale in resolution and by their longevity, a long-standing issue with perovskites. Another possibility raised is that of powering at least part of the system with the energy collected by the perovskite layers, akin to proposed perovskite-based solar panels.
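The researchers’ pretrained network itself isn’t reproduced here; as a generic sketch of the underlying idea of panchromatic fusion (folding a separately measured intensity channel back into the RGB channels), assuming a simple per-pixel gain in place of their convolutional network:

```python
import numpy as np

# Generic sketch of panchromatic fusion, not the researchers' pretrained CNN:
# rescale each RGB pixel so its brightness matches a separately measured
# wide-band intensity (panchromatic) channel.

rng = np.random.default_rng(0)
h, w = 4, 4
rgb = rng.random((h, w, 3))      # stacked R, G, B photodetector readings
pan = rng.random((h, w))         # wide-band intensity measurement

luminance = rgb.mean(axis=2)             # brightness implied by RGB alone
gain = pan / (luminance + 1e-8)          # per-pixel correction toward the pan channel
fused = np.clip(rgb * gain[..., None], 0.0, 1.0)

print(fused.shape)  # (4, 4, 3): RGB image with the panchromatic dynamic range folded in
```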
Joscha Bach is a cognitive scientist focusing on cognitive architectures, consciousness, models of mental representation, emotion, motivation and sociality.
0:00:00 Introduction
0:00:17 Bach’s work ethic / daily routine
0:01:35 What is your definition of truth?
0:04:41 Nature’s substratum is a “quantum graph”?
0:06:25 Mathematics as the descriptor of all language
0:13:52 Why is constructivist mathematics “real”? What’s the definition of “real”?
0:17:06 What does it mean to “exist”? Does “pi” exist?
0:20:14 The mystery of something vs. nothing. Existence is the default.
0:21:11 Bach’s model vs. the multiverse
0:26:51 Is the universe deterministic?
0:28:23 What determines the initial conditions, as well as the rules?
0:30:55 What is time? Is time fundamental?
0:34:21 What’s the optimal algorithm for finding truth?
0:40:40 Are the fundamental laws of physics ultimately “simple”?
0:50:17 The relationship between art and the artist’s cost function
0:54:02 Ideas are stories, being directed by intuitions
0:58:00 Society has a minimal role in training your intuitions
0:59:24 Why does art benefit from a repressive government?
1:04:01 A market case for civil rights
1:06:40 Fascism vs. communism
1:10:50 Bach’s “control / attention / reflective recall” model
1:13:32 What’s more fundamental: consciousness or attention?
1:16:02 The Chinese Room experiment
1:25:22 Is understanding predicated on consciousness?
1:26:22 Integrated Information Theory of consciousness (IIT)
1:30:15 Donald Hoffman’s theory of consciousness
1:32:40 Douglas Hofstadter’s “strange loop” theory of consciousness
1:34:10 Holonomic Brain theory of consciousness
1:34:42 Daniel Dennett’s theory of consciousness
1:36:57 Sensorimotor theory of consciousness (embodied cognition)
1:44:39 What is intelligence?
1:45:08 Intelligence vs. consciousness
1:46:36 Where does free will come into play in Bach’s model?
1:48:46 The opposite of free will can lead to, or feel like, addiction
1:51:48 Changing your identity to effectively live forever
1:59:13 Depersonalization disorder as a result of conceiving of your “self” as illusory
2:02:25 Dealing with a fear of loss of control
2:05:00 What about heart and conscience?
2:07:28 How to test / falsify Bach’s model of consciousness
2:13:46 How has Bach’s model changed in the past few years?
2:14:41 Why Bach doesn’t practice lucid dreaming anymore
2:15:33 Dreams and GANs (a machine learning framework)
2:18:08 If dreams are for helping us learn, why don’t we consciously remember our dreams?
2:19:58 Are dreams “real”? Is all of reality a dream?
2:20:39 How do you practically change your experience to be most positive / helpful?
2:23:56 What’s more important than survival? What’s worth dying for?
2:28:27 Bach’s identity
2:29:44 Is there anything objectively wrong with hating humanity?
2:30:31 Practical Platonism
2:33:00 What “God” is
2:36:24 Gods are as real as you, Bach claims
2:37:44 What “prayer” is, and why it works
2:41:06 Our society has lost its future and thus our culture
2:43:24 What does Bach disagree with Jordan Peterson about?
2:47:16 The millennials are the first generation that’s authoritarian since WW2
2:48:31 Bach’s views on the “social justice” movement
2:51:29 Universal Basic Income as an answer to social inequality, or General Artificial Intelligence?
2:57:39 Nested hierarchy of “I”s (the conflicts within ourselves)
2:59:22 In the USA, innovation is “cheating” (for the most part)
3:02:27 Activists are usually operating on false information
3:03:04 Bach’s Marxist roots and lessons to his former self
3:08:45 BONUS BIT: On society’s problems
Subscribe if you want more conversations on Theories of Everything, Consciousness, Free Will, God, and the mathematics / physics of each.
I’m producing a forthcoming documentary, Better Left Unsaid (http://betterleftunsaidfilm.com), on the topic of “when does the left go too far?” Visit that site if you’d like to contribute to getting the film distributed (in 2020) and to see more conversations like this.