
How does cold milk disperse when it is dripped into hot coffee? Even the fastest supercomputers are unable to perform the necessary calculations with high precision because the underlying quantum physical processes are extremely complex.

In 1982, Nobel Prize-winning physicist Richard Feynman suggested that, instead of using conventional computers, such questions are better solved using a quantum computer, which can simulate the quantum physical processes efficiently—a quantum simulator. With the rapid progress now being made in the development of quantum computers, Feynman’s vision could soon become a reality.

Together with researchers from Google and universities in five countries, Andreas Läuchli and Andreas Elben, both researchers at PSI, have built and successfully tested a new type of digital–analog quantum simulator.

Aurora, the exascale supercomputer at Argonne National Laboratory, is now available to researchers worldwide, as announced by the system’s operators from the U.S. Department of Energy on January 28, 2025. One of the goals for Aurora is to train large language models for science.

According to the official TOP500 rankings, only three of the world’s fastest supercomputers currently reach at least one exaflop. An exaflop is a quintillion (10¹⁸) calculations per second: a pocket calculator performing one calculation per second would need roughly 31 billion years to do what such a machine does in a single second. Or, to put it briefly: exaflop supercomputers are incredibly fast.
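To put a number on that comparison, here is a minimal back-of-the-envelope sketch in Python (the language choice and the one-calculation-per-second rate for the calculator are ours, used purely for the analogy):

```python
# Rough check of the "31 billion years" comparison: how long would
# 10**18 calculations take at a rate of one calculation per second?
EXAFLOP_OPS = 10**18                    # calculations an exaflop machine performs each second
SECONDS_PER_YEAR = 60 * 60 * 24 * 365.25

years = EXAFLOP_OPS / SECONDS_PER_YEAR  # time needed at one calculation per second
print(f"{years:.1e} years")             # ~3.2e+10, i.e. roughly 31-32 billion years
```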

The fastest of the three is El Capitan at Lawrence Livermore National Laboratory, with 1.742 exaflops under the HPL benchmark (High-Performance Linpack, a standardized test for measuring the computing power of supercomputers). It is followed by Frontier at Oak Ridge National Laboratory with 1.353 exaflops. The trio is completed by Aurora with 1.012 exaflops. Incidentally, all three laboratories belong to the U.S. Department of Energy (DOE).

Quantum computing represents a paradigm shift in computation with the potential to revolutionize scientific discovery and technological innovation. This seminar will examine the roadmap for constructing quantum supercomputers, emphasizing the integration of quantum processors with traditional high-performance computing (HPC) systems. The seminar will be led by prominent experts Prof. John Martinis (Qolab), Dr. Masoud Mohseni (HPE), and Dr. Yonatan Cohen (Quantum Machines), who will discuss the critical hurdles and opportunities in scaling quantum computing, drawing upon their latest research publication, “How to Build a Quantum Supercomputer: Scaling Challenges and Opportunities.”

In a milestone that brings quantum computing tangibly closer to large-scale practical use, scientists at Oxford University Physics have demonstrated the first instance of distributed quantum computing.

Using a photonic network interface, they successfully linked two separate quantum processors to form a single, fully connected quantum computer, paving the way to tackling computational challenges previously out of reach. The results were published on February 5 in Nature.

The breakthrough addresses quantum computing’s ‘scalability problem’: a quantum computer powerful enough to be industry-disrupting would need to process millions of qubits. Packing that many qubits into a single device, however, would require a machine of immense size.

The concept of computational consciousness and its potential impact on humanity is a topic of ongoing debate and speculation. While Artificial Intelligence (AI) has made significant advancements in recent years, we have not yet achieved a true computational consciousness capable of replicating the complexities of the human mind.

AI technologies are becoming increasingly sophisticated, performing tasks that were once exclusive to human intelligence. However, fundamental differences remain between AI and human consciousness. Human cognition is not purely computational; it encompasses emotions, subjective experiences, self-awareness, and other dimensions that machines have yet to replicate.

The rise of advanced AI systems will undoubtedly transform society, reshaping how we work, communicate, and interact with the digital world. AI enhances human capabilities, offering powerful tools for solving complex problems across diverse fields, from scientific research to healthcare. However, the ethical implications and potential risks associated with AI development must be carefully considered. Responsible AI deployment, emphasizing fairness, transparency, and accountability, is crucial.

In this evolving landscape, ETER9 introduces an avant-garde and experimental approach to AI-driven social networking. It redefines digital presence by allowing users to engage with AI entities known as ‘noids’ — autonomous digital counterparts designed to extend human presence beyond time and availability. Unlike traditional virtual assistants, noids act as independent extensions of their users, continuously learning from interactions to replicate communication styles and behaviors. These AI-driven entities engage with others, generate content, and maintain a user’s online presence, ensuring a persistent digital identity.

ETER9’s noids are not passive simulations; they dynamically evolve, fostering meaningful interactions and expanding the boundaries of virtual existence. Through advanced machine learning algorithms, they analyze user input, adapt to personal preferences, and refine their responses over time, creating an AI representation that closely mirrors its human counterpart. This unique integration of AI and social networking enables users to sustain an active online presence, even when they are not physically engaged.

The advent of autonomous digital counterparts in platforms like ETER9 raises profound questions about identity and authenticity in the digital age. While noids do not possess true consciousness, they provide a novel way for individuals to explore their own thoughts, behaviors, and social interactions. Acting as digital mirrors, they offer insights that encourage self-reflection and deeper understanding of one’s digital footprint.

As this frontier advances, it is essential to approach the development and interaction with digital counterparts thoughtfully. Issues such as privacy, data security, and ethical AI usage must be at the forefront. ETER9 is committed to ensuring user privacy and maintaining high ethical standards in the creation and functionality of its noids.

ETER9’s vision represents a paradigm shift in human-AI relationships. By bridging the gap between physical and virtual existence, it provides new avenues for creativity, collaboration, and self-expression. As we continue to explore the potential of AI-driven digital counterparts, it is crucial to embrace these innovations with mindful intent, recognizing that while AI can enhance and extend our digital presence, it is our humanity that remains the core of our existence.

As ETER9 pushes the boundaries of AI and virtual presence, one question lingers:

— Could these autonomous digital counterparts unlock deeper insights into human consciousness and the nature of our identity in the digital era?


This breakthrough tackles a major challenge, scalability, by allowing small quantum devices to work together rather than trying to cram millions of qubits into a single machine. Using photonic links, the researchers achieved quantum teleportation of logic gates across modules, essentially “wiring” them together. This distributed approach mirrors how classical supercomputers are built from many networked nodes, offering a flexible and upgradeable system.
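To make the idea of “wiring” modules together with teleported gates more concrete, the sketch below shows a nonlocal CNOT carried out with one shared Bell pair plus local operations. It is written in Qiskit purely for illustration (our choice of tool, not the Oxford team’s actual protocol or hardware), and the measurements with classically conditioned corrections are replaced by their coherent, deferred-measurement equivalents so the circuit runs on an ordinary statevector simulator:

```python
# Illustrative gate teleportation: a CNOT between a control qubit in module A
# and a target qubit in module B, mediated by one shared Bell pair.
# Qubit layout: q0 = control (module A), q1 = A's half of the Bell pair,
#               q2 = B's half of the Bell pair, q3 = target (module B).
from qiskit import QuantumCircuit
from qiskit.quantum_info import Statevector, partial_trace

qc = QuantumCircuit(4)
qc.h(0)              # test input: control in |+>, target left in |0>

qc.h(1)
qc.cx(1, 2)          # shared Bell pair linking the two modules
qc.cx(0, 1)          # module A: entangle the control with its Bell half
qc.cx(1, 2)          # deferred Z-measurement of q1 + X correction on q2
qc.cx(2, 3)          # module B: local CNOT onto the target
qc.h(2)
qc.cz(2, 0)          # deferred X-measurement of q2 + Z correction on q0

state = Statevector.from_instruction(qc)
rho = partial_trace(state, [1, 2])   # keep only control (q0) and target (q3)
print(rho)  # density matrix of (|00> + |11>)/sqrt(2): exactly CNOT applied to |+>|0>
```

In a real distributed system the two halves of the Bell pair sit in different modules, the entanglement is distributed over the photonic link, and the corrections are applied only after the measurement outcomes have been communicated classically.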

First Distributed Quantum Computer

In a major step toward making quantum computing practical on a large scale, scientists at Oxford University Physics have successfully demonstrated distributed quantum computing for the first time. By connecting two separate quantum processors using a photonic network interface, they effectively created a single, fully integrated quantum computer. This breakthrough opens the door to solving complex problems that were previously impossible to tackle. Their findings were published today (February 5) in Nature.

OpenAI on Thursday said the U.S. National Laboratories will be using its latest artificial intelligence models for scientific research and nuclear weapons security.

Under the agreement, up to 15,000 scientists working at the National Laboratories may be able to access OpenAI’s reasoning-focused o1 series. OpenAI will also work with Microsoft, its lead investor, to deploy one of its models on Venado, the supercomputer at Los Alamos National Laboratory, according to a release. Venado is powered by technology from Nvidia and Hewlett Packard Enterprise.

The field of quantum computing is advancing relentlessly: with performance far exceeding that of conventional PCs, these high-tech computers of the future will solve highly complex problems that have so far defeated even the largest supercomputers. And indeed, Chinese researchers have now made another breakthrough in the digital world of qubits – with Zuchongzhi 3.0, they have presented a quantum computer that rivals even Google’s Willow! But what can the new high-tech computer do? How does a quantum computer work in the first place? And above all, how will these high-performance computers change our everyday lives?

“The projects running on Aurora represent some of the most ambitious and innovative science happening today,” said Katherine Riley, ALCF director of science. “From modeling extremely complex physical systems to processing huge amounts of data, Aurora will accelerate discoveries that deepen our understanding of the world around us.”

On the hardware side, Aurora clearly impresses. The supercomputer comprises 166 racks, each holding 64 blades, for a total of 10,624 blades. Each blade contains two Xeon Max processors with 64 GB of HBM2E memory onboard and six Intel Data Center Max ‘Ponte Vecchio’ GPUs, all cooled by a specialized liquid-cooling system.

In total, Aurora has 21,248 CPUs with over 1.1 million high-performance x86 cores, 19.9 PB of DDR5 memory, and 1.36 PB of HBM2E memory attached to the CPUs. It also features 63,744 GPUs optimized for AI and HPC, equipped with 8.16 PB of HBM2E memory. Aurora uses 1,024 nodes with solid-state drives for storage, offering 220 PB of total capacity and 31 TB/s of bandwidth. The machine relies on HPE’s Shasta supercomputer architecture with Slingshot interconnects.
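As a quick cross-check, the per-blade figures quoted above do multiply out to the system-wide totals. A small illustrative Python snippet (our own arithmetic sketch, not from the article):

```python
# Aurora totals derived from the per-rack and per-blade figures above.
RACKS = 166
BLADES_PER_RACK = 64
CPUS_PER_BLADE = 2            # Xeon Max, each with 64 GB of HBM2E
GPUS_PER_BLADE = 6            # Intel Data Center Max "Ponte Vecchio"
HBM_PER_CPU_GB = 64

blades = RACKS * BLADES_PER_RACK
cpus = blades * CPUS_PER_BLADE
gpus = blades * GPUS_PER_BLADE
cpu_hbm_pb = cpus * HBM_PER_CPU_GB / 1e6   # GB -> PB (decimal)

print(blades, cpus, gpus, round(cpu_hbm_pb, 2))   # 10624 21248 63744 1.36
```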