
China Launches World’s Fastest Quantum Computers | China’s Advancement In Quantum Computers #technology

“Techno Jungles”

In 2019, Google announced that its 53-qubit Sycamore processor had completed a task in 3.3 minutes that would have taken a conventional supercomputer at least 2.5 days. According to reports, China’s 66-qubit Zuchongzhi 2 quantum processor completed the same task 1 million times faster in October of last year. The processor was developed by a group of researchers from the Chinese Academy of Sciences Center for Excellence in Quantum Information and Quantum Physics, together with the Shanghai Institute of Technical Physics and the Shanghai Institute of Microsystem and Information Technology.

According to NDTV, the Chinese government under Xi Jinping has spent $10 billion on the country’s National Laboratory for Quantum Information Sciences. This demonstrates China’s significant commitment to the field of quantum computing. According to Live Science, the nation is also a world leader in the field of quantum networking, which involves the transmission of data that has been encoded through the use of quantum mechanics over great distances.

Classical computers cannot match quantum computers on certain tasks because of the peculiar mathematics that governs the quantum world. Whereas a classical computer calculates with bits, each of which holds one of two states (typically written as a 1 or a 0), a quantum computer calculates with qubits, which can exist in a superposition of both states at once. This allows quantum computers to solve certain problems significantly faster than classical ones. But despite decades-old theories predicting that quantum computing will outperform classical computing, building practical quantum computers has proven a great deal more difficult.
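The bit-versus-qubit distinction above can be sketched numerically. Below is a minimal NumPy illustration of a single qubit put into an equal superposition by a Hadamard gate; it illustrates the concept only, not the workings of Sycamore or Zuchongzhi 2.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit is a unit vector of two
# complex amplitudes; |0> and |1> are the computational basis states.
ket0 = np.array([1, 0], dtype=complex)

# The Hadamard gate sends |0> to an equal superposition of |0> and |1>.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
psi = H @ ket0

# Measurement probabilities are the squared magnitudes of the amplitudes.
probs = np.abs(psi) ** 2
print(probs)  # [0.5 0.5] -- equal chance of reading out 0 or 1
```

Until measured, the qubit carries both amplitudes at once, which is what lets quantum algorithms explore many states in parallel.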


Her work helped her boss win the Nobel Prize. Now the spotlight is on her

Scientists have long studied the work of Subrahmanyan Chandrasekhar, the Indian-born American astrophysicist who won the Nobel Prize in 1983, but few know that his research on stellar and planetary dynamics owes a deep debt of gratitude to an almost forgotten woman: Donna DeEtte Elbert.

From 1948 to 1979, Elbert worked as a “computer” for Chandrasekhar, tirelessly devising and solving mathematical equations by hand. Though she shared authorship with the Nobel laureate on 18 papers and Chandrasekhar enthusiastically acknowledged her seminal contributions, her greatest achievement went unrecognized until a postdoctoral scholar at UCLA connected threads in Chandrasekhar’s work that all led back to Elbert.

Elbert’s achievement? Before anyone else, she predicted the conditions argued to be optimal for a planet or star to generate its own magnetic field, said the scholar, Susanne Horn, who has spent half a decade building on Elbert’s work.

Advancing AI trustworthiness: Updates on responsible AI research

Inflated expectations around the capabilities of AI technologies may lead people to believe that computers can’t be wrong. The truth is that AI failures are a matter of when, not if. AI is a human endeavor that combines information about people and the physical world into mathematical constructs. Such technologies typically rely on statistical methods, leaving the possibility of errors throughout an AI system’s lifespan. As AI systems become more widely used across domains, especially in high-stakes scenarios where people’s safety and well-being can be affected, a critical question must be addressed: how trustworthy are AI systems, and how much and when should people trust AI?

To infinity and some glimpses of beyond

Certain physical problems, such as the rupture of a thin sheet, can be difficult to solve because computations break down at the point of rupture. Here the authors propose a regularization approach to overcome this breakdown, which could help in dealing with mathematical models that have finite-time singularities.
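A toy illustration of the idea, not the authors’ method: the ODE dy/dt = y² blows up in finite time at t = 1, so a naive integration cannot proceed past that point, while an ad hoc damping term (a crude stand-in for regularization) keeps the growth rate bounded and lets the computation continue.

```python
def euler(f, y0, t_end, dt=1e-4):
    """Forward-Euler integration of dy/dt = f(y); returns the final value."""
    y, t = y0, 0.0
    while t < t_end:
        y += dt * f(y)
        t += dt
    return y

# dy/dt = y^2 with y(0) = 1 has exact solution y = 1/(1 - t),
# which diverges at t = 1: the computation breaks down there.
blowup = lambda y: y * y

# A crude regularization caps the growth rate once y becomes large,
# so the integration can be carried past the former singularity.
eps = 1e-3
regularized = lambda y: y * y / (1.0 + eps * y * y)

print(euler(blowup, 1.0, 0.9))       # large but finite just before t = 1
print(euler(regularized, 1.0, 1.5))  # still finite well past t = 1
```

The damping term here is chosen only for simplicity; the point is that a small modification of the equations can remove the finite-time singularity while leaving behavior far from the singularity essentially unchanged.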

Understanding how a cell becomes a person, with math

We all start from a single cell, the fertilized egg. From this cell, through a process involving cell division, cell differentiation and cell death, a human being takes shape, ultimately made up of over 37 trillion cells across hundreds or thousands of different cell types.

While we broadly understand many aspects of this developmental process, we do not know many of the details.

A better understanding of how a fertilized egg turns into trillions of cells to form a human is primarily a mathematical challenge. What we need are mathematical models that can predict and show what happens.
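A back-of-envelope calculation hints at why mathematics is the right tool here. If every cell divided in perfect synchrony (which real development does not do, so this is only a lower bound), the number of doubling rounds needed to exceed 37 trillion cells is surprisingly small:

```python
import math

# Minimum number of synchronous doublings for one fertilized egg
# to exceed 37 trillion cells: smallest n with 2**n >= 37e12.
TARGET = 37e12
rounds = math.ceil(math.log2(TARGET))
print(rounds)  # 46, since 2**45 < 37e12 < 2**46
```

Only about 46 doublings separate one cell from a full human-scale cell count, which is why models of when and where cells divide, differentiate, or die carry so much explanatory weight.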

New method for comparing neural networks exposes how artificial intelligence works

A team at Los Alamos National Laboratory has developed a novel approach for comparing neural networks that looks within the “black box” of artificial intelligence to help researchers understand neural network behavior. Neural networks recognize patterns in datasets; they are used everywhere in society, in applications such as virtual assistants, facial recognition systems and self-driving cars.

“The research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”

Jones is the lead author of the paper “If You’ve Trained One You’ve Trained Them All: Inter-Architecture Similarity Increases With Robustness,” which was presented recently at the Conference on Uncertainty in Artificial Intelligence. In addition to studying network similarity, the paper is a crucial step toward characterizing the behavior of robust neural networks.
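The excerpt does not specify which similarity measure the paper uses, but linear centered kernel alignment (CKA) is one widely used way to compare neural-network representations and serves as an illustrative sketch: given the activations two networks produce on the same inputs, it returns a score in [0, 1].

```python
import numpy as np

def linear_cka(X, Y):
    """Linear Centered Kernel Alignment between two activation matrices.

    X, Y: (samples, features) activations of two networks on the same
    inputs. Returns a similarity score in [0, 1]; 1 means the
    representations are identical up to rotation and scaling."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(X.T @ Y, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return num / den

rng = np.random.default_rng(0)
acts_a = rng.standard_normal((100, 32))          # network A's activations
acts_b = acts_a @ rng.standard_normal((32, 32))  # a linear remix of A
acts_c = rng.standard_normal((100, 32))          # unrelated network C

print(linear_cka(acts_a, acts_a))  # ~1.0: identical representations
print(linear_cka(acts_a, acts_b))  # typically high: same underlying features
print(linear_cka(acts_a, acts_c))  # low: statistically unrelated features
```

The activation matrices here are random stand-ins, not data from the Los Alamos study; the point is only how a single scalar can summarize how similarly two networks represent the same inputs.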

Voxengo plugin developer says he’s found “some ‘backdoor’ in mathematics itself” that proves the universe has a ‘creator’

Vaneev posits that “‘intelligent impulses’ or even ‘human mind’ itself (because a musician can understand these impulses) existed long before the ‘Big Bang’ happened. This discovery is probably both the greatest discovery in the history of mankind, and the worst discovery (for many) as it poses very unnerving questions that touch religious grounds.”

The Voxengo developer sums up his findings as follows: “These results of 1-bit PRVHASH say the following: if abstract mathematics contains not just a system of rules for manipulating numbers, but also a freely-defined fixed information that is also ‘readable’ by a person, then mathematics does not just ‘exist’, but ‘it was formed’, because mathematics does not evolve (beside human discovery of new rules and patterns). And since physics cannot be formulated without such mathematics, and physical processes clearly obey these mathematical rules, it means that a Creator/Higher Intelligence/God exists in relation to the Universe. For the author personally, everything is proven here.”

Vaneev says that he wanted to “share my astonishment and satisfaction with the results of this work that took much more of my time than I had wished for,” but adds that you don’t need to concern yourself too much with his findings if you don’t want to.

Particle physics on the brain

Circa 2018.


Understanding the fundamental constituents of the universe is tough. Making sense of the brain is another challenge entirely. Each cubic millimetre of human brain contains around 4 km of neuronal “wires” carrying millivolt-level signals, connecting innumerable cells that define everything we are and do. The ancient Egyptians already knew that different parts of the brain govern different physical functions, and a couple of centuries have passed since physicians entertained crowds by passing currents through corpses to make them seem alive. But only in recent decades have neuroscientists been able to delve deep into the brain’s circuitry.

On 25 January, speaking to a packed audience in CERN’s Theory department, Vijay Balasubramanian of the University of Pennsylvania described a physicist’s approach to solving the brain. Balasubramanian did his PhD in theoretical particle physics at Princeton University and also worked on the UA1 experiment at CERN’s Super Proton Synchrotron in the 1980s. Today, his research ranges from string theory to theoretical biophysics, where he applies methodologies common in physics to model the neural topography of information processing in the brain.

“We are using, as far as we can, hard mathematics to make real, quantitative, testable predictions, which is unusual in biology.” — Vijay Balasubramanian