Andrej Karpathy: Software Is Changing (Again)

Andrej Karpathy’s keynote at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1mkwfmV2PlekxDN8isMrDA5evc4wW

Drawing on his work at Stanford, OpenAI, and Tesla, Andrej sees a shift underway. Software is changing, again. We’ve entered the era of “Software 3.0,” where natural language becomes the new programming interface and models do the rest.

He explores what this shift means for developers, users, and the design of software itself: we’re not just using new tools, we’re building a new kind of computer.

More content from Andrej: @andrejkarpathy.

Chapters and Thoughts (From Andrej Karpathy!)
0:00 — Imo fair to say that software is changing quite fundamentally again. LLMs are a new kind of computer, and you program them *in English*. Hence I think they are well deserving of a major version upgrade in terms of software.
6:06 — LLMs have properties of utilities, of fabs, and of operating systems → New LLM OS, fabbed by labs, and distributed like utilities (for now). Many historical analogies apply — imo we are computing circa ~1960s.
14:39 — LLM psychology: LLMs = …
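Karpathy’s “program them in English” point can be made concrete with a minimal sketch. The contrast below is illustrative: in Software 1.0 the behavior lives in explicit code, while in Software 3.0 the “program” is a natural-language prompt handed to an LLM. `call_llm` is a hypothetical stand-in for any completion API, not a real library function.

```python
# Software 1.0: behavior is fixed by explicit, hand-written logic.
def classify_sentiment_v1(text: str) -> str:
    positive = {"great", "love", "excellent", "good"}
    words = set(text.lower().split())
    return "positive" if words & positive else "negative"

# Software 3.0: the behavior is specified in English.
# The prompt below *is* the program; the model does the rest.
PROMPT = (
    "Classify the sentiment of the following text as 'positive' or "
    "'negative'. Reply with exactly one word.\n\nText: {text}"
)

def classify_sentiment_v3(text: str, call_llm) -> str:
    # `call_llm` is a hypothetical stand-in for an LLM completion API.
    return call_llm(PROMPT.format(text=text)).strip().lower()
```

In this framing, editing the English prompt changes the program, which is why Karpathy argues natural language is becoming the new programming interface.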

Scientists propose blueprint for ‘universal translator’ in quantum networks

UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave to optical signals and vice versa.

The technology could serve as a universal translator for quantum computers—enabling them to talk to one another over long distances and converting up to 95% of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.

“It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his Ph.D. at UBC’s faculty of applied science and the Stewart Blusson Quantum Matter Institute (SBQMI).

Neuron–astrocyte associative memory

For decades, scientists believed that glial cells—the brain’s “support staff”—were just passive helpers to the neurons that do the heavy lifting of thinking and remembering. But that view is rapidly changing.


Astrocytes, the most abundant type of glial cell, play a fundamental role in memory. Despite most hippocampal synapses being contacted by an astrocyte, there are no current theories that explain how neurons, synapses, and astrocytes might collectively contribute to memory function. We demonstrate that fundamental aspects of astrocyte morphology and physiology naturally lead to a dynamic, high-capacity associative memory system. The neuron–astrocyte networks generated by our framework are closely related to popular machine learning architectures known as Dense Associative Memories. Adjusting the connectivity pattern, the model developed here leads to a family of associative memory networks that includes a Dense Associative Memory and a Transformer as two limiting cases.
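The Dense Associative Memory mentioned above can be sketched in a few lines. The update rule below is one standard continuous formulation (the softmax-based “modern Hopfield” retrieval step), chosen here because it makes the paper’s Transformer connection visible: the update is exactly a softmax-attention operation over the stored patterns. This is an illustrative sketch, not the paper’s neuron–astrocyte model.

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # numerical stability
    e = np.exp(z)
    return e / e.sum()

def dam_retrieve(patterns, query, beta=8.0, steps=5):
    """Dense Associative Memory retrieval (continuous/softmax form).

    With a sharp beta, a corrupted query converges to the nearest
    stored pattern. Note the update X.T @ softmax(beta * X @ sigma)
    is the same operation as softmax attention in a Transformer.
    """
    X = np.asarray(patterns, dtype=float)   # (num_patterns, dim)
    sigma = np.asarray(query, dtype=float)
    for _ in range(steps):
        sigma = X.T @ softmax(beta * (X @ sigma))
    return sigma

# Store two patterns, then recall from a query with one flipped bit.
patterns = np.array([[1, 1, -1, -1],
                     [-1, 1, -1, 1]], dtype=float)
noisy = np.array([1, 1, 1, -1], dtype=float)  # pattern 0 with bit 3 flipped
recalled = dam_retrieve(patterns, noisy)
```

Adjusting the interaction (here, the sharpness `beta` and the form of the separation function) is what moves such networks between the Hopfield-like and attention-like limiting cases the abstract describes.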

Traversal Emerges From Stealth With $48 Million From Sequoia And Perkins To Reimagine Site Reliability In The AI Era

With more code created by AI, there is more surface area to troubleshoot. There is a need for AI to autonomously troubleshoot, remediate and even prevent complex incidents at scale—self-healing codegen.

Artificial neural networks reveal how peripersonal neurons represent the space around the body

The brains of humans and other primates are known to execute various sophisticated functions, one of which is the representation of the space immediately surrounding the body. This area, also sometimes referred to as “peripersonal space,” is where most interactions between people and their surrounding environment typically take place.

Researchers at the Chinese Academy of Sciences, the Italian Institute of Technology (IIT) and other institutes recently investigated the neural processes through which the brain represents the area around the body, using brain-inspired computational models. Their findings, published in Nature Neuroscience, suggest that receptive fields surrounding different parts of the body contribute to building a modular model of the space immediately surrounding a person or artificial (AI) agent.

“Our journey into this field began truly serendipitously, during unfunded experiments done purely out of curiosity,” Giandomenico Iannetti, senior author of the paper, told Medical Xpress. “We discovered that the hand-blink reflex, which is evoked by electrically shocking the hand, was strongly modulated by the position of the hand with respect to the eye.”
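The core idea of a body-part-centered receptive field can be sketched simply. In the toy model below, a peripersonal neuron responds with a Gaussian falloff over the distance between a stimulus and the body part it is anchored to, so its response follows the hand wherever the hand moves. The coordinates and `sigma` are made up for illustration; this is not the paper’s actual network.

```python
import math

def pps_response(stimulus, body_part, sigma=0.15):
    """Toy peripersonal neuron: a Gaussian receptive field centred on a
    body part (e.g. the hand). Because the field is anchored to the body
    part rather than to fixed external coordinates, the representation
    stays modular and body-centred."""
    d = math.dist(stimulus, body_part)
    return math.exp(-(d * d) / (2 * sigma * sigma))

# Hypothetical 3D positions (metres) for illustration only.
hand_at_rest = (0.30, 0.0, 0.40)
hand_near_face = (0.10, 0.0, 0.90)
stimulus = (0.12, 0.0, 0.92)      # object approaching the face

# The same stimulus drives the hand-centred neuron strongly only when
# the hand is near it (cf. the hand-blink reflex modulation above).
far_resp = pps_response(stimulus, hand_at_rest)
near_resp = pps_response(stimulus, hand_near_face)
```

The hand-blink result quoted above has this flavor: the reflex to a hand shock depends on where the hand is relative to the eye, as a hand-centered field predicts.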

Electron microscopy technique captures nanoparticle organizations to forge new materials

A research team including members from the University of Michigan has unveiled a new observational technique that’s sensitive to the dynamics of the intrinsic quantum jiggles of materials, or phonons.

This work will help scientists and engineers better design metamaterials—substances that possess exotic properties that rarely exist in nature—that are reconfigurable and made from solutions containing nanoparticles that self-assemble into larger structures, the researchers said. These materials have wide-ranging applications, from shock absorption to devices that guide acoustic and optical energy in high-powered computer applications.

“This opens a new research area where nanoscale building blocks—along with their intrinsic optical and electromagnetic properties—can be incorporated into mechanical metamaterials, enabling emerging technologies in multiple fields from robotics and mechanical engineering to information technology,” said Xiaoming Mao, U-M professor of physics and co-author of the new study.

Light-based computing with optical fibers shows potential for ultra-fast AI systems

Imagine a computer that does not rely only on electronics but uses light to perform tasks faster and more efficiently. A collaboration between two research teams from Tampere University in Finland and Université Marie et Louis Pasteur in France has now demonstrated a novel way of processing information using light and optical fibers, opening up the possibility of building ultra-fast computers. The studies are published in Optics Letters and on the arXiv preprint server.

The research, performed by postdoctoral researchers Dr. Mathilde Hary from Tampere University and Dr. Andrei Ermolaev from the Université Marie et Louis Pasteur, Besançon, demonstrated how light propagating inside thin glass fibers can mimic the way artificial intelligence (AI) processes information. Their work investigated a particular class of computing architecture known as an Extreme Learning Machine, an approach inspired by neural networks.

“Instead of using conventional electronics and algorithms, computation is achieved by taking advantage of the nonlinear interaction between intense light pulses and the glass,” Hary and Ermolaev explain.
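The Extreme Learning Machine idea is compact enough to sketch. An ELM pushes inputs through a fixed random nonlinear transformation (the role the nonlinear light–glass interaction plays in these experiments) and trains only a linear readout on top. The NumPy version below uses `tanh` as a stand-in nonlinearity; it is a generic illustration of the architecture, not the teams’ optical implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, y, n_hidden=64):
    """Extreme Learning Machine: a fixed random nonlinear projection
    (untrained, like the fiber's nonlinear optics) followed by a linear
    readout, which is the only part that is trained."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random, never trained
    b = rng.normal(size=n_hidden)
    H = np.tanh(X @ W + b)                        # nonlinear "layer"
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)  # train readout only
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy task: XOR, which no purely linear readout of the raw inputs solves;
# the random nonlinear projection makes it linearly separable.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([0.0, 1.0, 1.0, 0.0])
W, b, beta = elm_fit(X, y)
pred = elm_predict(X, W, b, beta)
```

Because only the readout is trained, the expensive nonlinear stage can be any fixed physical system with rich dynamics, which is exactly what makes an optical fiber a candidate substrate.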

Three-dimensional reconstruction of inertial confinement fusion hot-spot plasma from x-ray and nuclear diagnostics on OMEGA

Multidimensional effects degrade the neutron yield and the compressed areal density of laser-direct-drive inertial confinement fusion implosions of layered deuterium–tritium cryogenic targets on the OMEGA Laser System with respect to 1D radiation-hydrodynamic simulation predictions. A comprehensive physics-informed 3D reconstruction effort is under way to infer hot-spot and shell conditions at stagnation from four x-ray and seven neutron detectors distributed around the OMEGA target chamber. Neutron diagnostics, providing measurements of the neutron yield, hot-spot flow velocity, and apparent ion-temperature distribution, are used to infer the mode-1 perturbation at stagnation. The x-ray imagers record the shape of the hot-spot plasma to diagnose mode-1 and mode-2 perturbations. A deep-learning convolutional neural network trained on an extensive set of 3D radiation-hydrodynamic simulations is used to interpret the x-ray and nuclear measurements to infer the 3D profiles of the hot-spot plasma conditions and the amount of laser energy coupled to the hot-spot plasma. A 3D simulation database shows that larger mode-1 asymmetries are correlated with higher hot-spot flow velocities and reduced laser-energy coupling and neutron yield. Three-dimensional hot-spot reconstructions from x-ray measurements indicate that higher amounts of residual kinetic energy are correlated with higher measured hot-spot flow velocities, consistent with 3D simulations.

Super Humanity | How AI Will Transform Us

Super Humanity — Imagine if your brain could interface directly with AI.
Super Humanity explores the revolutionary intersection of neuroscience and technology, revealing a future where artificial intelligence integrates effortlessly with human thought.

Super Humanity (2019)
Director: Ruth Chao.
Writers: Ruth Chao, Paula Cons, Alphonse de la Puente.
Genre: Documentary, Sci-Fi.
Country: Portugal, Spain.
Language: English.
Release Date: December 27, 2019 (Spain)

Synopsis:
The convergence of human brains and AI will create a new breed of humanity—often described as ‘super-humanity.’

By enabling brain-machine interfaces, human cognitive powers will be amplified, marking the dawn of enhanced humans. Connected minds will unlock advanced synthetic telepathy, offering not only the ability to perceive others’ thoughts but also to influence them. Yet, what are the advantages and dangers posed by these groundbreaking advancements?

Neurotechnology stands at the threshold of a societal transformation, reshaping our concepts of identity and reality itself. The establishment of neuro-rights will be crucial, requiring laws that protect the privacy of our conscious and even subconscious minds.

Super Humanity delves deeply into the potential of this new frontier.