World’s Leading Scientific Supercomputing Centers Adopt NVIDIA NVQLink to Integrate Grace Blackwell Platform With Quantum Processors

NVIDIA today announced that the world’s leading scientific computing centers are adopting NVIDIA® NVQLink™, a first-of-its-kind, universal interconnect for linking quantum processors with state-of-the-art accelerated computing.

UT Eclipses 5,000 GPUs To Increase Dominance in Open-Source AI, Strengthen Nation’s Computing Power

Amid the private sector’s race to lead artificial intelligence innovation, The University of Texas at Austin has strengthened its lead in academic computing power and its dominance in public, open-source AI. UT has acquired high-performance Dell PowerEdge servers and NVIDIA AI infrastructure powered by more than 4,000 NVIDIA Blackwell architecture graphics processing units (GPUs), the most powerful GPUs in production to date.

The new infrastructure is a game-changer for the University, expanding its research and development capabilities in agentic and generative AI while opening the door to more society-changing discoveries that support America’s technological dominance. The NVIDIA GB200 systems and NVIDIA Vera CPU servers will be installed as part of Horizon, the largest academic supercomputer in the nation, which goes online next year at UT’s Texas Advanced Computing Center (TACC). The National Science Foundation (NSF) is funding Horizon through its Leadership Class Computing Facility program to revolutionize U.S. computational research.

UT has the most AI computing power in academia. In total, the University has amassed more than 5,000 advanced NVIDIA GPUs across its academic and research facilities. The University has the computing power to produce open-source large language models — which power most modern AI applications — that rival any other public institution. Open-source computing is nonproprietary and serves as the backbone for publicly driven research. Unlike private sector models, it can be fine-tuned to support research in the public interest, producing discoveries that offer profound benefits to society in such areas as health care, drug development, materials and national security.
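To give a concrete sense of what fine-tuning an open-weight model looks like in practice, here is a minimal sketch using LoRA adapters from the Hugging Face transformers and peft libraries. The model name, target modules, hyperparameters, and training snippet are illustrative assumptions, not UT’s or TACC’s actual workflow; LoRA is shown because it trains only a small set of added weights, which is one common way researchers adapt open models on modest hardware.

```python
# Illustrative sketch only: fine-tune a small open-weight language model
# with LoRA adapters. Model choice and hyperparameters are assumptions.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from peft import LoraConfig, get_peft_model

model_name = "EleutherAI/gpt-neo-125m"   # assumed stand-in for a larger open model
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)

# Attach low-rank adapters so only a small fraction of the weights is trained.
lora = LoraConfig(r=16, lora_alpha=32,
                  target_modules=["q_proj", "v_proj"],
                  task_type="CAUSAL_LM")
model = get_peft_model(model, lora)

# One illustrative gradient step on a domain-specific text snippet.
batch = tokenizer("Candidate compounds for antibiotic resistance studies ...",
                  return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=1e-4)
loss = model(**batch, labels=batch["input_ids"]).loss
loss.backward()
optimizer.step()
print("training loss after one step:", float(loss))
```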

Brain organoid pioneers fear inflated claims about biocomputing could backfire

For the brain organoids in Lena Smirnova’s lab at Johns Hopkins University, there comes a time in their short lives when they must graduate from the cozy bath of the bioreactor, leave the warm, salty broth behind, and be plopped onto a silicon chip laced with microelectrodes. From there, these tiny white spheres of human tissue can simultaneously send and receive electrical signals that, once decoded by a computer, will show how the cells inside them are communicating with each other as they respond to their new environments.
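For a flavor of what that decoding step can involve, the toy sketch below runs threshold-based spike detection on simulated multielectrode voltage traces. The signal model, sampling rate, and five-sigma rule are illustrative assumptions, not the Johns Hopkins lab’s actual pipeline.

```python
# Toy sketch: detect threshold crossings ("spikes") in simulated
# multielectrode-array recordings. All parameters are assumptions.
import numpy as np

rng = np.random.default_rng(0)
n_electrodes, n_samples = 8, 30_000          # one second at an assumed 30 kHz

# Simulate noisy voltage traces with occasional large negative deflections.
traces = rng.normal(0.0, 5.0, size=(n_electrodes, n_samples))   # microvolts
for ch in range(n_electrodes):
    for t in rng.choice(n_samples, size=20, replace=False):
        traces[ch, t] -= 60.0                                    # injected spike

# Classic rule of thumb: threshold at ~5x a robust per-channel noise estimate.
noise = np.median(np.abs(traces), axis=1) / 0.6745
crossings = traces < (-5.0 * noise)[:, None]
print("threshold crossings per electrode:", crossings.sum(axis=1))
```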

More and more, it looks like these miniature lab-grown brain models are able to do things that resemble the biological building blocks of learning and memory. That’s what Smirnova and her colleagues reported earlier this year. It was a step toward establishing something she and her husband and collaborator, Thomas Hartung, are calling “organoid intelligence.”

Another goal of that work would be to leverage those functions to build biocomputers — organoid-machine hybrids that do the work of the systems powering today’s AI boom, but without all the environmental carnage. The idea is to harness some fraction of the human brain’s stunning information-processing superefficiencies in place of building more water-sucking, electricity-hogging, supercomputing data centers.

Despite widespread skepticism, it’s an idea that’s started to gain some traction. Both the National Science Foundation and DARPA have invested millions of dollars in organoid-based biocomputing in recent years. And there are a handful of companies claiming to have built cell-based systems already capable of some form of intelligence. But to the scientists who first forged the field of brain organoids to study psychiatric and neurodevelopmental disorders and find new ways to treat them, this has all come as a rather unwelcome development.

At a meeting last week at the Asilomar conference center in California, researchers, ethicists, and legal experts gathered to discuss the ethical and social issues surrounding human neural organoids, which fall outside of existing regulatory structures for research on humans or animals. Much of the conversation circled around how and where the field might set limits for itself, which often came back to the question of how to tell when lab-cultured cellular constructs have started to develop sentience, consciousness, or other higher-order properties widely regarded as carrying moral weight.

Supercomputer simulates quantum chip in unprecedented detail

A broad collaboration of researchers from across Lawrence Berkeley National Laboratory (Berkeley Lab) and the University of California, Berkeley has performed an unprecedented simulation of a quantum microchip, a key step forward in perfecting the chips required for this next-generation technology. The simulation used more than 7,000 NVIDIA GPUs on the Perlmutter supercomputer at the National Energy Research Scientific Computing Center (NERSC), a U.S. Department of Energy (DOE) user facility.

Modeling quantum chips allows researchers to understand their function and performance before they’re fabricated, ensuring that they work as intended and spotting any problems that might come up. Quantum Systems Accelerator (QSA) researchers Zhi Jackie Yao and Andy Nonaka of the Applied Mathematics and Computational Research (AMCR) Division at Berkeley Lab develop electromagnetic models to simulate these chips, a key step in the process of producing better quantum hardware.

“The model predicts how design decisions affect electromagnetic wave propagation in the chip,” said Nonaka, “to make sure proper signal coupling occurs and to avoid unwanted crosstalk.”
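To give a flavor of the kind of electromagnetic wave-propagation modeling described here, the following is a minimal one-dimensional finite-difference time-domain (FDTD) sketch in normalized units. It is a toy illustration, not the Berkeley Lab production solver; the grid size, source, and Courant number are arbitrary assumptions.

```python
# Minimal 1D FDTD (Yee leapfrog) sketch of electromagnetic wave propagation,
# in normalized units. Illustrative only; all parameters are assumptions.
import numpy as np

nx, nt = 400, 1000          # grid points, time steps
ez = np.zeros(nx)           # electric field (z component)
hy = np.zeros(nx - 1)       # magnetic field (y component), staggered half a cell
courant = 0.5               # normalized Courant number (stability requires <= 1)

for t in range(nt):
    # Update H from the spatial difference of E, then E from H (leapfrog in time).
    hy += courant * (ez[1:] - ez[:-1])
    ez[1:-1] += courant * (hy[1:] - hy[:-1])
    # Soft sinusoidal source injected near the left boundary.
    ez[20] += np.sin(2 * np.pi * 0.02 * t)

print("peak |Ez| after propagation:", np.abs(ez).max())
```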

Physicists unveil system to solve long-standing barrier to new generation of supercomputers

The dream of creating game-changing quantum computers—supermachines that encode information in single atoms rather than conventional bits—has been hampered by the formidable challenge known as quantum error correction.

In a paper published Monday in Nature, Harvard researchers demonstrated a new system capable of detecting and removing errors below a key performance threshold, potentially providing a workable solution to the problem.

“For the first time, we combined all essential elements for a scalable, error-corrected quantum computation in an integrated architecture,” said Mikhail Lukin, co-director of the Quantum Science and Engineering Initiative, Joshua and Beth Friedman University Professor, and senior author of the new paper. “These experiments—by several measures the most advanced that have been done on any quantum platform to date—create the scientific foundation for practical large-scale quantum computation.”
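To illustrate the basic idea behind detecting and removing errors, here is a toy simulation of the classical three-bit repetition code, in which parity-check “syndromes” locate a flipped bit without reading the encoded value directly. It is a conceptual sketch only, far removed from the Harvard team’s neutral-atom architecture; the physical error rate and trial count are arbitrary assumptions.

```python
# Toy illustration of syndrome-based error correction using the classical
# 3-bit repetition code. Error rate and trial count are assumptions.
import random

def encode(bit):
    return [bit, bit, bit]                              # 0 -> 000, 1 -> 111

def apply_noise(bits, p):
    return [b ^ (random.random() < p) for b in bits]    # independent bit flips

def correct(bits):
    s1 = bits[0] ^ bits[1]      # parity check on bits 0 and 1
    s2 = bits[1] ^ bits[2]      # parity check on bits 1 and 2
    if s1 and not s2:
        bits[0] ^= 1            # syndromes point at bit 0
    elif s1 and s2:
        bits[1] ^= 1            # syndromes point at bit 1
    elif s2:
        bits[2] ^= 1            # syndromes point at bit 2
    return bits

p, trials = 0.05, 100_000
raw_fail = sum(apply_noise([0], p)[0] for _ in range(trials)) / trials
enc_fail = sum(correct(apply_noise(encode(0), p))[0] for _ in range(trials)) / trials
print(f"unencoded error rate ~{raw_fail:.4f}, corrected logical error rate ~{enc_fail:.4f}")
```

With the assumed 5% physical error rate, the corrected logical error rate comes out far lower, which is the sense in which encoding pays off once errors are rare enough; quantum codes add the considerable complication of correcting phase errors as well, which the toy above does not capture.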

First full simulation of 50-qubit universal quantum computer achieved

A research team at the Jülich Supercomputing Center, together with experts from NVIDIA, has set a new record in quantum simulation: for the first time, a universal quantum computer with 50 qubits has been fully simulated—a feat achieved on Europe’s first exascale supercomputer, JUPITER, inaugurated at Forschungszentrum Jülich in September.

The result surpasses the previous world record of 48 qubits, established by Jülich researchers in 2022 on Japan’s K computer. It showcases the immense computational power of JUPITER and opens new horizons for developing and testing quantum algorithms. The research is published on the arXiv preprint server.

Quantum computer simulations are vital for developing future quantum systems. They allow researchers to verify experimental results and test new algorithms long before powerful quantum machines become reality. Among these are the Variational Quantum Eigensolver (VQE), which can model molecules and materials, and the Quantum Approximate Optimization Algorithm (QAOA), used for optimization problems in logistics, finance, and artificial intelligence.
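A small sketch makes clear why fully simulating 50 qubits is so demanding: a state-vector simulator must store one complex amplitude per basis state, so memory doubles with every added qubit. The miniature NumPy simulator below, run here at an assumed laptop-scale 20 qubits, mimics that idea rather than the actual distributed implementation used on JUPITER, and then estimates the memory a full 50-qubit state vector would require.

```python
# Miniature state-vector simulator illustrating exponential scaling.
# Illustrative only; qubit count and gate choice are assumptions.
import numpy as np

def apply_single_qubit_gate(state, gate, target, n_qubits):
    """Apply a 2x2 gate to one qubit of an n-qubit state vector."""
    state = state.reshape([2] * n_qubits)
    state = np.tensordot(gate, state, axes=([1], [target]))
    state = np.moveaxis(state, 0, target)
    return state.reshape(-1)

n = 20                                     # feasible on a laptop; 50 is not
state = np.zeros(2**n, dtype=np.complex64)
state[0] = 1.0                             # |00...0>
hadamard = np.array([[1, 1], [1, -1]], dtype=np.complex64) / np.sqrt(2)
for q in range(n):
    state = apply_single_qubit_gate(state, hadamard, q, n)
print("norm after 20 Hadamards:", float(np.vdot(state, state).real))

# Memory estimate for a full 50-qubit state at 8 bytes per complex64 amplitude.
n_amplitudes = 2**50
print(f"50 qubits: {n_amplitudes:.3e} amplitudes, "
      f"~{n_amplitudes * 8 / 2**50:.0f} PiB of memory")
```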

Nobel winner, HPE and chip industry firms team up to make a practical quantum supercomputer

John M. Martinis, one of this year’s winners of the Nobel Prize in physics for breakthroughs in quantum computing, on Monday formed an alliance with HPE and several chip firms to create a practical, mass-producible quantum supercomputer.
