From spin glasses to quantum codes: Researchers develop optimal error correction algorithm

Scientists have developed an exact approach to a key quantum error correction problem once believed to be unsolvable, and have shown that what appeared to be hardware-related errors may in fact be due to suboptimal decoding.

The algorithm, called PLANAR, achieved a 25% reduction in logical error rates when applied to Google Quantum AI’s experimental data. This result revealed that a quarter of what the tech giant had attributed to an “error floor” was actually caused by its decoding method rather than by genuine hardware limitations.

Quantum computers are extraordinarily sensitive to errors, making error correction essential for practical applications.
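The article doesn’t spell out PLANAR’s internals beyond its exactness, but the quantity an exact decoder computes can be shown in miniature. The sketch below brute-forces maximum-likelihood decoding of a small classical repetition code with hypothetical, unequal per-bit error rates, summing probability over each logical class, and contrasts it with a minimum-weight decoder, the kind of cheaper heuristic whose mistakes can masquerade as a hardware error floor. All parameters are illustrative assumptions, and the brute-force enumeration merely stands in for the efficient computation an exact decoder like PLANAR performs.

```python
# Minimal sketch: exact maximum-likelihood (ML) decoding of a classical
# repetition code by brute-force enumeration. This is NOT the PLANAR
# algorithm from the article; it only illustrates what "exact decoding"
# means and why a cheaper decoder can lose to it.

from itertools import product

# Hypothetical per-bit flip probabilities (assumption: heterogeneous noise).
P_FLIP = [0.05, 0.20, 0.05, 0.20, 0.05]
N = len(P_FLIP)

def syndrome(error):
    """Parity checks of the length-N repetition code: s_i = e_i XOR e_{i+1}."""
    return tuple(error[i] ^ error[i + 1] for i in range(N - 1))

def error_probability(error):
    prob = 1.0
    for bit, p in zip(error, P_FLIP):
        prob *= p if bit else (1.0 - p)
    return prob

def ml_decode(observed_syndrome):
    """Sum probability over each logical class consistent with the
    syndrome and pick the more likely class (e[0] labels the class)."""
    class_prob = {0: 0.0, 1: 0.0}
    for error in product((0, 1), repeat=N):
        if syndrome(error) == observed_syndrome:
            class_prob[error[0]] += error_probability(error)
    return max(class_prob, key=class_prob.get), class_prob

def min_weight_decode(observed_syndrome):
    """Cheaper heuristic: pick the single lowest-weight consistent error."""
    best = min((e for e in product((0, 1), repeat=N)
                if syndrome(e) == observed_syndrome), key=sum)
    return best[0]

if __name__ == "__main__":
    s = (1, 0, 0, 1)  # example syndrome
    ml_class, probs = ml_decode(s)
    print("ML class:", ml_class, "class probabilities:", probs)
    print("Min-weight class:", min_weight_decode(s))
```

With these rates the two decoders disagree on this syndrome: the lowest-weight error lies in one logical class, but the noise model makes the other class more probable, which is exactly the kind of gap an exact decoder closes.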

Vision-language model creates plans for automated inspection of environments

Recent advances in robotics have enabled the automation of various real-world tasks, ranging from manufacturing and packaging goods in industrial settings to the precise execution of minimally invasive surgical procedures. Robots could also be helpful for inspecting infrastructure and environments that are hazardous or difficult for humans to access, such as tunnels, dams, pipelines, railways and power plants.

Despite robots’ promise for safely assessing real-world environments, most inspections are still carried out by humans. In recent years, computer scientists have been trying to develop computational models that can effectively plan the trajectories robots should follow when inspecting specific environments and ensure that they execute the actions needed to complete their missions.

Researchers at Purdue University and LightSpeed Studios recently introduced a new training-free computational technique for generating plans based on written descriptions, which could guide the movements of robots as they inspect specific environments. Their proposed approach, outlined in a paper published on the arXiv preprint server, specifically relies on vision-language models (VLMs), which can process both images and written texts.
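The paper’s exact pipeline isn’t detailed here, so the following is only a toy sketch of what a training-free, VLM-guided inspection planner might do downstream of the model. The VLM call is mocked as a function that has already converted images and a written mission description into named inspection targets with coordinates (all names and positions are hypothetical assumptions), and a greedy nearest-neighbor pass orders the visits.

```python
# Toy sketch of the planning step such a system might run downstream of a
# VLM. The VLM itself is mocked; nothing here is the authors' algorithm.

import math

def mock_vlm_targets(mission_text):
    """Stand-in for a real VLM query; returns (name, x, y) inspection
    targets. Names and coordinates are hypothetical."""
    return [("intake valve", 2.0, 8.0), ("weld seam", 6.0, 3.0),
            ("pressure gauge", 9.0, 9.0), ("drain grate", 1.0, 1.0)]

def plan_route(targets, start=(0.0, 0.0)):
    """Greedy nearest-neighbor visit order over the proposed targets."""
    remaining = list(targets)
    route, pos = [], start
    while remaining:
        nxt = min(remaining, key=lambda t: math.dist(pos, (t[1], t[2])))
        remaining.remove(nxt)
        route.append(nxt)
        pos = (nxt[1], nxt[2])
    return route

if __name__ == "__main__":
    targets = mock_vlm_targets("Inspect the pump room for leaks and corrosion.")
    for i, (name, x, y) in enumerate(plan_route(targets), 1):
        print(f"{i}. {name} at ({x}, {y})")
```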

Multicore fiber testbed demonstrates precise optical clock signal transmission over 25 km

Researchers have shown, for the first time, that transmission of ultrastable optical signals from optical clocks across tens of kilometers of deployed multicore fiber is compatible with simultaneous transmission of telecommunications data.

The achievement demonstrates that these emerging high-capacity fiber optic networks could be used to connect optical clocks at various locations, enabling new scientific applications.

As global data demands continue to surge, multicore fiber is being installed to help overcome the limits of existing networks. These fibers pack multiple light-guiding cores into a single strand, greatly increasing capacity for applications like streaming, finance and artificial intelligence.
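The article doesn’t describe the team’s analysis pipeline, but a standard way to quantify how “ultrastable” a transferred clock signal is uses the overlapping Allan deviation of fractional-frequency data. A minimal sketch, with toy white-noise samples standing in for real measurements:

```python
# Overlapping Allan deviation: the standard stability measure for clock
# signals. Toy data only; not the paper's measurement pipeline.

import numpy as np

def overlapping_adev(y, tau0, m):
    """Overlapping Allan deviation at averaging time m*tau0, from
    fractional-frequency samples y taken every tau0 seconds."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    if n < 2 * m + 1:
        raise ValueError("not enough samples for this averaging factor")
    # Block averages via cumulative sums: ybar[i] = mean(y[i:i+m]).
    c = np.concatenate(([0.0], np.cumsum(y)))
    ybar = (c[m:] - c[:-m]) / m          # length n - m + 1
    d = ybar[m:] - ybar[:-m]             # ybar[i+m] - ybar[i]
    return m * tau0, np.sqrt(0.5 * np.mean(d ** 2))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    y = 1e-15 * rng.standard_normal(100_000)   # toy white-frequency noise
    for m in (1, 10, 100, 1000):
        tau, adev = overlapping_adev(y, tau0=1.0, m=m)
        print(f"tau = {tau:7.1f} s   adev = {adev:.2e}")
```

For white frequency noise the deviation falls as the square root of the averaging time, which is the behavior the toy data reproduces.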

MIT’s Optical AI Chip That Could Revolutionize 6G at the Speed of Light

As more connected devices require greater bandwidth for activities like teleworking and cloud computing, managing the limited wireless spectrum shared by all users is becoming increasingly difficult.

To address this, engineers are turning to artificial intelligence.

Elon Musk: Digital Superintelligence, Multiplanetary Life, How to Be Useful

A fireside with Elon Musk at AI Startup School in San Francisco.

Before rockets and robots, Elon Musk was drilling holes through his office floor to borrow internet. In this candid talk, he walks through the early days of Zip2, the Falcon 1 launches that nearly ended SpaceX, and the “miracle” of Tesla surviving 2008.

He shares the thinking that guided him—building from first principles, doing useful things, and the belief that we’re in the middle of an intelligence big bang.

Chapters:

00:00 — Intro.
01:25 — His origin story.
02:00 — Dream to help build the internet.
04:40 — Zip2 and lessons learned.
08:00 — PayPal.
14:30 — Origin of SpaceX.
18:30 — Building rockets from first principles.
23:50 — Lessons in leadership.
27:10 — Building up xAI.
39:00 — Superintelligence and synthetic data.
39:30 — Multiplanetary future.
43:00 — Neuralink, AI safety and the singularity.

Andrej Karpathy: Software Is Changing (Again)

Andrej Karpathy’s keynote at AI Startup School in San Francisco. Slides provided by Andrej: https://drive.google.com/file/d/1a0h1mkwfmV2PlekxDN8isMrDA5evc4wW

Drawing on his work at Stanford, OpenAI, and Tesla, Andrej sees a shift underway. Software is changing, again. We’ve entered the era of “Software 3.0,” where natural language becomes the new programming interface and models do the rest.

He explores what this shift means for developers, users, and the design of software itself: we’re not just using new tools, but building a new kind of computer.
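To make the “Software 3.0” idea concrete, here is a toy contrast, not taken from the talk itself: the same task written as Software 1.0 (hand-coded rules) and as Software 3.0 (an English prompt as the program). The LLM call is a placeholder stub, not a real API.

```python
# Toy contrast of the "Software 1.0 vs 3.0" framing. The LLM call below
# is a placeholder stub; in Software 3.0 the English prompt is the program.

def classify_sentiment_v1(text: str) -> str:
    """Software 1.0: behavior is hand-coded, rule by rule."""
    negative = {"bad", "terrible", "broken", "slow"}
    hits = sum(word.strip(".,!?") in negative for word in text.lower().split())
    return "negative" if hits else "positive"

SENTIMENT_PROMPT = (
    "You are a sentiment classifier. "
    "Reply with exactly one word, 'positive' or 'negative', "
    "for the following review:\n{review}"
)

def call_llm(prompt: str) -> str:
    """Stub standing in for any hosted model; swap in a real client here."""
    return "negative"  # placeholder output

def classify_sentiment_v3(text: str) -> str:
    """Software 3.0: the prompt is the program; the model executes it."""
    return call_llm(SENTIMENT_PROMPT.format(review=text))

if __name__ == "__main__":
    review = "The update is terrible and everything feels slow."
    print("1.0:", classify_sentiment_v1(review))
    print("3.0:", classify_sentiment_v3(review))
```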

More content from Andrej: @andrejkarpathy.

Chapters and Thoughts (From Andrej Karpathy!)
0:00 — Imo fair to say that software is changing quite fundamentally again. LLMs are a new kind of computer, and you program them *in English*. Hence I think they are well deserving of a major version upgrade in terms of software.
6:06 — LLMs have properties of utilities, of fabs, and of operating systems → New LLM OS, fabbed by labs, and distributed like utilities (for now). Many historical analogies apply — imo we are computing circa ~1960s.
14:39 — LLM psychology: LLMs = …

Scientists propose blueprint for ‘universal translator’ in quantum networks

UBC researchers are proposing a solution to a key hurdle in quantum networking: a device that can “translate” microwave to optical signals and vice versa.

The technology could serve as a universal translator for quantum computers—enabling them to talk to one another over long distances and converting up to 95% of a signal with virtually no noise. And it all fits on a silicon chip, the same material found in everyday computers.

“It’s like finding a translator that gets nearly every word right, keeps the message intact and adds no background chatter,” says study author Mohammad Khalifa, who conducted the research during his Ph.D. at UBC’s faculty of applied science and the Stewart Blusson Quantum Matter Institute (SBQMI).
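The article doesn’t give the device’s physical model, but cavity-based microwave-to-optical transducers are commonly characterized by an on-resonance conversion efficiency of the form eta = eta_mw * eta_opt * 4C/(1+C)^2, with C the cooperativity. As an illustration only, assuming that standard formula and perfect extraction, the sketch below asks which cooperativities would reach the quoted 95%:

```python
# Illustrative only: the article does not spell out the UBC device model.
# Standard internal efficiency for cavity transduction: 4C/(1+C)^2,
# which peaks at 1 when the cooperativity C equals 1.

import math

def internal_efficiency(C):
    return 4 * C / (1 + C) ** 2

def cooperativity_for(eta_target):
    """Solve 4C/(1+C)^2 = eta for C; rearranges to
    C^2 + (2 - 4/eta)C + 1 = 0, giving two positive roots."""
    b = 2 - 4 / eta_target
    disc = b * b - 4
    if disc < 0:
        raise ValueError("efficiency above the C = 1 maximum of 1.0")
    return ((-b - math.sqrt(disc)) / 2, (-b + math.sqrt(disc)) / 2)

if __name__ == "__main__":
    lo, hi = cooperativity_for(0.95)
    print(f"95% internal efficiency at C = {lo:.3f} or C = {hi:.3f}")
    for C in (lo, 1.0, hi):
        print(f"C = {C:.3f} -> eta_int = {internal_efficiency(C):.4f}")
```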

Neuron–astrocyte associative memory

For decades, scientists believed that glial cells—the brain’s “support staff”—were just passive helpers to the neurons that do the heavy lifting of thinking and remembering. But that view is rapidly changing.

Astrocytes, the most abundant type of glial cell, play a fundamental role in memory. Although most hippocampal synapses are contacted by an astrocyte, no current theory explains how neurons, synapses, and astrocytes might collectively contribute to memory function. We demonstrate that fundamental aspects of astrocyte morphology and physiology naturally lead to a dynamic, high-capacity associative memory system. The neuron–astrocyte networks generated by our framework are closely related to popular machine learning architectures known as Dense Associative Memories. By adjusting the connectivity pattern, the model developed here yields a family of associative memory networks that includes the Dense Associative Memory and the Transformer as two limiting cases.
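For readers unfamiliar with the machine-learning side of that claim, here is a minimal sketch of a Dense Associative Memory retrieval step in its “modern Hopfield” form, whose one-step update coincides with Transformer attention over the stored patterns. It illustrates the architecture family the paper connects to, not the neuron–astrocyte model itself.

```python
# Minimal Dense Associative Memory retrieval (modern Hopfield update):
# q <- X^T softmax(beta * X q), i.e., attention over stored patterns.
# Illustration of the architecture family only, not the paper's model.

import numpy as np

def softmax(z):
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def retrieve(patterns, query, beta=8.0, steps=5):
    """Iterate the update so a corrupted query falls into a stored memory."""
    X = np.asarray(patterns, dtype=float)   # rows are stored patterns
    q = np.asarray(query, dtype=float)
    for _ in range(steps):
        q = X.T @ softmax(beta * (X @ q))
    return q

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    X = rng.choice([-1.0, 1.0], size=(20, 64))   # 20 random +/-1 patterns
    corrupted = X[3].copy()
    corrupted[:16] *= -1                          # flip a quarter of the bits
    out = retrieve(X, corrupted)
    overlaps = (X @ np.sign(out)) / X.shape[1]
    print("best match:", overlaps.argmax(), "overlap:", overlaps.max())
```

The sharpness parameter beta plays the role of the attention temperature; in the high-beta limit a single stored pattern dominates the softmax, which is the dense-memory regime, while the softmax-weighted average is exactly the attention computation the abstract alludes to.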