Bloomberg’s Mark Gurman breaks down what Dojo supercomputer project lead Ganesh Venkataramanan’s departure means for Tesla on Bloomberg Radio.


DARPA-Funded Research Leads to Quantum Computing Breakthrough
Some new concepts for me, but interesting and a good step forward.
A team of researchers working on DARPA’s Optimization with Noisy Intermediate-Scale Quantum devices (ONISQ) program has created the first-ever quantum circuit with logical quantum bits (qubits), a key discovery that could accelerate fault-tolerant quantum computing and revolutionize concepts for designing quantum computer processors.
The ONISQ program began in 2020 seeking to demonstrate a quantitative advantage of quantum information processing by leapfrogging the performance of classical-only supercomputers to solve a particularly challenging class of problems known as combinatorial optimization. The program pursued a hybrid concept combining intermediate-sized "noisy" (error-prone) quantum processors with classical systems, focused specifically on solving optimization problems of interest to defense and commercial industry. Teams were selected to explore various types of physical, non-logical qubits, including superconducting qubits, ion qubits, and Rydberg atomic qubits.
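To make "combinatorial optimization" concrete, a classic instance is MaxCut: split a graph's nodes into two groups so that as many edges as possible cross between them. The tiny graph and exhaustive classical search below are purely illustrative (the edge list is a made-up 5-node example, not from ONISQ), sketching the problem class rather than the program's hybrid quantum-classical approach:

```python
from itertools import product

# Hypothetical 5-node graph: a 5-cycle plus one chord (0, 2).
edges = [(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)]

def cut_value(assignment):
    # Count edges whose endpoints land on opposite sides of the cut.
    return sum(1 for u, v in edges if assignment[u] != assignment[v])

# Brute force over all 2**5 = 32 ways to 2-color the nodes.
best = max(product([0, 1], repeat=5), key=cut_value)
print(best, cut_value(best))
```

The brute-force search doubles in cost with every added node, which is exactly why larger instances motivate specialized hardware.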
The Harvard research team, supported by MIT, QuEra Computing, Caltech, and Princeton, focused on exploring the potential of Rydberg qubits, and in the course of their research made a major breakthrough: The team developed techniques to create error-correcting logical qubits using arrays of “noisy” physical Rydberg qubits. Logical qubits are a critical missing piece in the puzzle to realize fault-tolerant quantum computing. In contrast to error-prone physical qubits, logical qubits are error-corrected to maintain their quantum state, making them useful for solving a diverse set of complex problems.
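A loose classical analogy for a logical qubit is a logical bit built from redundant physical copies. The sketch below encodes one bit as three, flips one copy at random to mimic noise, and recovers the original by majority vote. This is only a toy illustration of the redundancy-plus-correction idea; the team's Rydberg-array logical qubits rely on genuinely quantum error-correcting codes, which this classical sketch does not capture:

```python
import random

def encode(bit):
    # One logical bit stored as three physical copies.
    return [bit, bit, bit]

def decode(codeword):
    # Majority vote survives any single bit-flip error.
    return 1 if sum(codeword) >= 2 else 0

logical = encode(1)
noisy = logical.copy()
noisy[random.randrange(3)] ^= 1  # flip one physical copy at random
print(decode(noisy))             # majority vote still recovers 1
```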
IBM finally unveils quantum powerhouse, a 1,000+ qubit processor
With a processor that has fewer qubits, IBM has also improved error correction, paving the way for the use of these processors in real-world applications.
IBM has unveiled its much-awaited 1,000+ qubit quantum processor Condor, alongside a utility-scale processor dubbed IBM Quantum Heron at its Quantum Summit in New York. The latter is the first in the series of utility-scale quantum processors that IBM took four years to build, the company said in a press release.
Quantum computers, considered the next frontier of computing, have locked companies big and small in a race to build the platform that everybody would want to use to solve complex problems in medicine, physics, mathematics, and many other fields.
Even the fastest supercomputers of today are years behind the potential of quantum computers, whose capabilities keep improving with the addition of quantum bits or qubits in the processor. So, a 1,000+ qubit processor is a big deal, and even though a startup may have beaten IBM to this milestone, the latter’s announcement is still significant for what else IBM brings to the table.

Quantum computers could solve problems in minutes that would take today’s supercomputers millions of years
“We’re looking at a race, a race between China, between IBM, Google, Microsoft, Honeywell,” Kaku said. “All the big boys are in this race to create a workable, operationally efficient quantum computer. Because the nation or company that does this will rule the world economy.”
It’s not just the economy quantum computing could impact. A quantum computer is set up at Cleveland Clinic, where Chief Research Officer Dr. Serpil Erzurum believes the technology could revolutionize the world of health care.
Quantum computers can potentially model the behavior of proteins, the molecules that regulate all life, Erzurum said. Proteins change their shape to change their function in ways that are too complex to follow, but quantum computing could change that understanding.

How one national lab is getting its supercomputers ready for the AI age
OAK RIDGE, Tenn. — At Oak Ridge National Laboratory, the government-funded science research facility nestled between Tennessee’s Great Smoky Mountains and Cumberland Plateau that is perhaps best known for its role in the Manhattan Project, two supercomputers are currently rattling away, speedily making calculations meant to help tackle some of the biggest problems facing humanity.
You wouldn’t be able to tell from looking at them. A supercomputer called Summit mostly comprises hundreds of black cabinets filled with cords, flashing lights and powerful graphics processing units, or GPUs. The sound of tens of thousands of spinning disks on the computer’s file systems, and air cooling technology for ancillary equipment, make the device sound somewhat like a wind turbine — and, at least to the naked eye, the contraption doesn’t look much different from any other corporate data center. Its next-door neighbor, Frontier, is set up in a similar manner across the hall, though it’s a little quieter and the cabinets have a different design.
Yet inside those arrays of cabinets are powerful specialty chips and components capable of, collectively, training some of the largest AI models known. Frontier is currently the world’s fastest supercomputer, and Summit is the world’s seventh-fastest supercomputer, according to rankings published earlier this month. Now, as the Biden administration boosts its focus on artificial intelligence and touts a new executive order for the technology, there’s growing interest in using these supercomputers to their full AI potential.

Paradox of ultramassive black hole formation solved by supercomputer
With a gravitational field so strong that not even light can escape its grip, black holes are probably the most interesting and bizarre objects in the universe.
Due to their extreme properties, a theoretical description of these celestial bodies is impossible within the framework of Newton’s classical theory of gravity. It requires the use of general relativity, the theory proposed by Einstein in 1915, which treats gravitational fields as deformations in the fabric of space-time.
Black holes are usually formed from the collapse of massive stars during their final stage of evolution. Therefore, when a black hole is born, its mass does not exceed a few dozen solar masses.
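The "not even light can escape" condition corresponds to the Schwarzschild radius, r_s = 2GM/c², which also gives a feel for the "few dozen solar masses" scale of stellar-collapse black holes. The constants below are standard reference values; the 30-solar-mass figure is an illustrative choice, not taken from the article:

```python
# Schwarzschild radius r_s = 2 G M / c**2 of a black hole of mass M.
G = 6.674e-11      # gravitational constant, m^3 kg^-1 s^-2
C = 2.998e8        # speed of light, m/s
M_SUN = 1.989e30   # solar mass, kg

def schwarzschild_radius_m(mass_kg):
    return 2 * G * mass_kg / C ** 2

# A 30-solar-mass stellar remnant has an event horizon only ~89 km across.
print(schwarzschild_radius_m(30 * M_SUN) / 1000)
```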


The AI Time Machine: When Will Superintelligence Arrive?
Buckle up, because we’re entering the era of thinking machines that make humans look like chattering chimps! But don’t worry about polishing your resume to impress our future robot overlords just yet. The experts are wildly divided on when superintelligent AI will actually arrive. It’s like we’re staring at an AI time machine without knowing if it will teleport us to 2 years from now or 2 decades into the future!
In one corner, we have Mustafa Suleyman from Inflection AI. He says take a chill pill, we’ve got at least 10–20 more years before the AI apocalypse. But hang on…his company just whipped up the world’s 2nd biggest AI supercomputer! It’s cruising with 3X the horsepower of GPT-4, the chatbot with reading skills rivaling a university professor’s. So something tells me Suleyman’s timeline is slower than your grandma driving without her glasses.
Meanwhile, OpenAI is broadcasting a very different arrival time. They believe superintelligence could show up within just 4 years! To get ready, they’ve launched an AI safety SWAT team, led by brainiacs like Ilya Sutskever. They’re funneling millions into this initiative with a strict 2027 deadline. Why so urgent? Well, they say superintelligence could either catapult humanity into a sci-fi future utopia, or permanently reduce us to drooling toddlers. Not great options there.