
Quantum computers have the potential to solve complex problems that would be impossible for the most powerful classical supercomputer to crack.

Just like a classical computer has separate yet interconnected components that must work together, such as a memory chip and a CPU on a motherboard, a quantum computer will need to communicate between multiple processors.

Current architectures used to interconnect superconducting quantum processors are “point-to-point” in connectivity, meaning they require a series of transfers between network nodes, with compounding error rates.
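A quick sketch of why "compounding error rates" matter: if each point-to-point transfer preserves the state with some fidelity, an n-hop path multiplies those fidelities together. (The function and the 99% figure below are illustrative assumptions, not numbers from the article.)

```python
# Illustrative only: how per-hop transfer fidelity compounds over a
# point-to-point interconnect. An n-hop path with per-hop fidelity f
# delivers the state with fidelity f**n.

def end_to_end_fidelity(per_hop_fidelity: float, hops: int) -> float:
    """Fidelity after a chain of point-to-point transfers."""
    return per_hop_fidelity ** hops

# Even a 99% per-hop fidelity degrades quickly over several hops:
for hops in (1, 5, 10, 20):
    print(f"{hops:2d} hops -> fidelity {end_to_end_fidelity(0.99, hops):.3f}")
```

The exponential fall-off is the motivation for interconnects that avoid long chains of intermediate transfers.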

Adopting liquid cooling technology could significantly reduce electricity costs across the data center.

Many Porsche “purists” reflect forlornly upon the 1997, fifth-generation “996” version of the iconic 911 sports car. It was the first 911 with a water-cooled engine; since its market debut in 1964, the 911, like its popular predecessor the 356, had been built around air-cooled engines for over three decades. The two main reasons usually given for the shift from air to water cooling were 1) environmental (emission standards) and 2) performance (in part, cylinder-head cooling). The writing was on the wall: if Porsche was going to remain competitive in the sports car market and in racing, the move to water-cooled engines was unavoidable.

Fast forward to today’s data centers trying to meet the demands of AI computing. For similar reasons, we’re seeing a shift toward liquid cooling. Machines relying on something other than air for cooling date back at least to the Cray-1 supercomputer, which used a Freon-based system, and the Cray-2, which immersed its boards in Fluorinert, a non-conductive liquid. The Cray-1 was rated at about 115 kW and the Cray-2 at 195 kW, both a far cry from the tens of megawatts drawn by today’s most powerful supercomputers. Another distinguishing feature is that these were “supercomputers,” not just data center servers. Data centers have largely run on air-cooled processors, but with the incredible demand for computing created by the explosive growth of AI applications, they are being called on to provide supercomputing-like capabilities.
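The electricity-cost argument for liquid cooling is often framed in terms of PUE (Power Usage Effectiveness: total facility power divided by IT power). A back-of-envelope sketch, using assumed figures rather than anything from the article:

```python
# Back-of-envelope sketch with assumed numbers: lowering PUE (e.g. by
# moving from air to liquid cooling) scales the whole facility's
# annual electricity bill, since PUE = total facility power / IT power.

def annual_energy_cost(it_load_mw: float, pue: float,
                       price_per_kwh: float = 0.10) -> float:
    """Annual electricity cost in dollars for a given IT load and PUE."""
    hours_per_year = 8760
    return it_load_mw * 1000 * pue * hours_per_year * price_per_kwh

air    = annual_energy_cost(10, pue=1.6)  # assumed air-cooled facility
liquid = annual_energy_cost(10, pue=1.1)  # assumed liquid-cooled facility
print(f"air: ${air:,.0f}  liquid: ${liquid:,.0f}  saved: ${air - liquid:,.0f}")
```

For a 10 MW IT load at $0.10/kWh, that PUE drop saves on the order of a few million dollars per year; the real figures depend heavily on climate, hardware, and electricity prices.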

D-Wave Quantum Inc. announced a scientific advance confirming its annealing quantum computer outperformed a powerful classical supercomputer in simulating complex magnetic materials. This achievement is documented in a peer-reviewed paper titled “Beyond-Classical Computation in Quantum Simulation,” published in Science.

The research indicates that D-Wave’s quantum computer completed simulations that, if attempted with classical technology, would take nearly a million years and consume more than the world’s annual electricity usage. The D-Wave Advantage2 prototype was central to this success.

An international team collaborated to simulate quantum dynamics in programmable spin glasses using both D-Wave’s system and the Frontier supercomputer at Oak Ridge National Laboratory, showcasing the quantum computer’s capability for swift and accurate simulation of various lattice structures and materials properties.
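To give a flavor of the kind of model involved, here is a toy classical sketch of an Ising spin glass: spins taking values ±1 with random couplings J_ij, which make the energy landscape rugged and hard to explore. (This is purely illustrative; the actual study simulated *quantum* dynamics of such systems on D-Wave hardware and on Frontier, not this classical toy.)

```python
# Toy classical illustration of a spin glass: random couplings J_ij
# between neighboring spins create a rugged energy landscape.
import random

def spin_glass_energy(spins, couplings):
    """Ising energy E = -sum over pairs of J_ij * s_i * s_j."""
    return -sum(J * spins[i] * spins[j]
                for (i, j), J in couplings.items())

random.seed(0)
n = 8  # spins on a small chain (real studies use large 2D/3D lattices)
couplings = {(i, i + 1): random.choice([-1.0, 1.0]) for i in range(n - 1)}
spins = [random.choice([-1, 1]) for _ in range(n)]
print("energy of a random configuration:", spin_glass_energy(spins, couplings))
```

Finding low-energy configurations of such systems, and especially simulating their quantum dynamics, is exactly where the classical cost explodes with system size.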

Scientists have tapped into the Summit supercomputer to study an elaborate molecular pathway called nucleotide excision repair (NER), revealing how damaged strands of DNA are repaired through this pathway.

NER’s protein components can change shape to perform different functions of repair on broken strands of DNA.

A team of scientists from Georgia State University built a computer model of a critical NER component called the pre-incision complex (PInC) that plays a key role in regulating DNA repair processes in the latter stages of the NER pathway.

Artificial Intelligence (AI) is, without a doubt, the defining technological breakthrough of our time. It represents not only a quantum leap in our ability to solve complex problems but also a mirror reflecting our ambitions, fears, and ethical dilemmas. As we witness its exponential growth, we cannot ignore the profound impact it is having on society. But are we heading toward a bright future or a dangerous precipice?

This opinion piece aims to foster critical reflection on AI’s role in the modern world and what it means for our collective future.

AI is no longer the stuff of science fiction. It is embedded in nearly every aspect of our lives, from the virtual assistants on our smartphones to the algorithms that recommend what to watch on Netflix or determine our eligibility for a bank loan. In medicine, AI is revolutionizing diagnostics and treatments, enabling the early detection of cancer and the personalization of therapies based on a patient’s genome. In education, adaptive learning platforms are democratizing access to knowledge by tailoring instruction to each student’s pace.

These advancements are undeniably impressive. AI promises a more efficient, safer, and fairer world. But is this promise being fulfilled? Or are we inadvertently creating new forms of inequality, where the benefits of technology are concentrated among a privileged few while others are left behind?

One of AI’s most pressing challenges is its impact on employment. Automation is eliminating jobs across various sectors, including manufacturing, services, and even traditionally “safe” fields such as law and accounting. Meanwhile, workforce reskilling is not keeping pace with technological disruption. The result? A growing divide between those equipped with the skills to thrive in the AI-driven era and those displaced by machines.

Another urgent concern is privacy. AI relies on vast amounts of data, and the massive collection of personal information raises serious questions about who controls these data and how they are used. We live in an era where our habits, preferences, and even emotions are continuously monitored and analyzed. This not only threatens our privacy but also opens the door to subtle forms of manipulation and social control.

Then, there is the issue of algorithmic bias. AI is only as good as the data it is trained on. If these data reflect existing biases, AI can perpetuate and even amplify societal injustices. We have already seen examples of this, such as facial recognition systems that fail to accurately identify individuals from minority groups or hiring algorithms that inadvertently discriminate based on gender. Far from being neutral, AI can become a tool of oppression if not carefully regulated.

Who Decides What Is Right?

AI forces us to confront profound ethical questions. When a self-driving car must choose between hitting a pedestrian or colliding with another vehicle, who decides the “right” choice? When AI is used to determine parole eligibility or distribute social benefits, how do we ensure these decisions are fair and transparent?

The reality is that AI is not just a technical tool—it is also a moral one. The choices we make today about how we develop and deploy AI will shape the future of humanity. But who is making these decisions? Currently, AI’s development is largely in the hands of big tech companies and governments, often without sufficient oversight from civil society. This is concerning because AI has the potential to impact all of us, regardless of our individual consent.

A Utopia or a Dystopia?

The future of AI remains uncertain. On one hand, we have the potential to create a technological utopia, where AI frees us from mundane tasks, enhances productivity, and allows us to focus on what truly matters: creativity, human connection, and collective well-being. On the other hand, there is the risk of a dystopia where AI is used to control, manipulate, and oppress—dividing society between those who control technology and those who are controlled by it.

The key to avoiding this dark scenario lies in regulation and education. We need robust laws that protect privacy, ensure transparency, and prevent AI’s misuse. But we also need to educate the public on the risks and opportunities of AI so they can make informed decisions and demand accountability from those in power.

Artificial Intelligence is, indeed, the Holy Grail of Technology. But unlike the medieval legend, this Grail is not hidden in a distant castle—it is in our hands, here and now. It is up to us to decide how we use it. Will AI be a tool for building a more just and equitable future, or will it become a weapon that exacerbates inequalities and threatens our freedom?

The answer depends on all of us. As citizens, we must demand transparency and accountability from those developing and implementing AI. As a society, we must ensure that the benefits of this technology are shared by all, not just a technocratic elite. And above all, we must remember that technology is not an end in itself but a means to achieve human progress.

The future of AI is the future we choose to build. And at this critical moment in history, we cannot afford to get it wrong. The Holy Grail is within our reach—but its true value will only be realized if we use it for the common good.

__
Copyright © 2025, Henrique Jorge

[ This article was originally published in Portuguese in SAPO’s technology section at: https://tek.sapo.pt/opiniao/artigos/o-santo-graal-da-tecnologia ]

Sunburns and aging skin are obvious effects of exposure to harmful UV rays, tobacco smoke and other carcinogens. But the effects aren’t just skin deep. Inside the body, DNA is literally being torn apart.

Understanding how the body heals and protects itself from DNA damage is vital for treating genetic disorders and life-threatening diseases such as cancer. But despite numerous studies and medical advances, much about the molecular mechanisms of DNA repair remains a mystery.

For the past several years, researchers at Georgia State University have tapped into the Summit supercomputer at the Department of Energy’s Oak Ridge National Laboratory to study an elaborate molecular pathway called nucleotide excision repair (NER). NER relies on an array of highly dynamic protein complexes to cut out (excise) damaged DNA with surgical precision.