If you know the atoms that compose a particular molecule or solid material, the interactions between those atoms can be determined computationally, by solving quantum mechanical equations—at least, if the molecule is small and simple. However, solving these equations, critical for fields from materials engineering to drug design, requires a prohibitively long computational time for complex molecules and materials.
Now, researchers at the U.S. Department of Energy’s (DOE) Argonne National Laboratory and the University of Chicago’s Pritzker School of Molecular Engineering (PME) and Department of Chemistry have explored the possibility of solving these electronic structures using a quantum computer.
The research, which uses a combination of new computational approaches, was published online in the Journal of Chemical Theory and Computation. It was supported by Q-NEXT, a DOE National Quantum Information Science Research Center led by Argonne, and by the Midwest Integrated Center for Computational Materials (MICCoM).
A project led by a group of researchers from Israel’s Bar-Ilan University, in collaboration with TII—the Quantum Research Center in Abu Dhabi, United Arab Emirates, is advancing quantum computing by improving the performance of superconducting qubits, the basic computation units of a superconducting quantum processor. The improved qubit, called a tunable superconducting flux qubit, is a micron-sized superconducting loop where electrical current can flow clockwise or counterclockwise, or in a quantum superposition of both directions.
These quantum features would allow such a computer to be much faster and more powerful than a conventional computer. For this speed potential to be realized, however, the quantum computer needs to operate several hundred qubits simultaneously without them unintentionally interfering with each other.
Compared with the technologies used in today's quantum processors, superconducting flux qubits offer several important advantages: first, they are very fast and reliable; and second, it may be simpler to integrate many flux qubits into a processor than with currently available technology.
In the realm of computing, information is usually perceived as being represented by a binary system of ones and zeros. However, in our everyday lives, we use a decimal system consisting of ten digits to represent numbers. For instance, the number 9 in binary is represented as 1001, requiring four digits instead of just one in the decimal system.
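As a quick illustration of the digit-count difference, here is a minimal Python sketch (not from the article) converting a decimal number to its binary form:

```python
# Decimal 9 written in binary needs four digits instead of one.
n = 9
binary = format(n, "b")
print(binary)       # "1001"
print(len(binary))  # 4 binary digits, versus 1 decimal digit
```

In general, representing a number n in base b takes roughly log_b(n) digits, which is why higher-dimensional carriers such as qudits can pack more information per physical unit.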
Today’s quantum computers have emerged from the binary system, but the physical systems that encode their quantum bits (qubits) have the capability to encode quantum digits (qudits) as well. This was recently demonstrated by a team headed by Martin Ringbauer at the University of Innsbruck’s Department of Experimental Physics. According to experimental physicist Pavel Hrmo at ETH Zurich: “The challenge for qudit-based quantum computers has been to efficiently create entanglement between the high-dimensional information carriers.”
The results were published on April 19, 2023, in the journal Nature Communications.
Using the James Webb Space Telescope (JWST) and the Hubble Space Telescope (HST), astronomers from the University of Padua, Italy, and elsewhere have observed a metal-poor globular cluster known as Messier 92. The observations deliver crucial information regarding multiple stellar populations in this cluster. Results were published April 12 on the arXiv pre-print server.
Located some 26,700 light years away in the constellation of Hercules, Messier 92 (or M92 for short) is a globular cluster with a metallicity of just −2.31 and a mass of about 200,000 solar masses. The cluster, estimated to be 11.5 billion years old, is known to host at least two generations of stars, named 1G and 2G. Previous studies have found that Messier 92 has an extended 1G sequence, which hosts about 30.4% of cluster stars, and two distinct groups of 2G stars (2GA and 2GB).
A team of physicists has illuminated certain properties of quantum systems by observing how their fluctuations spread over time. The research offers an intricate understanding of a complex phenomenon that is foundational to quantum computing—a method that can perform certain calculations significantly more efficiently than conventional computing.
“In an era of quantum computing it’s vital to generate a precise characterization of the systems we are building,” explains Dries Sels, an assistant professor in New York University’s Department of Physics and an author of the paper, which is published in the journal Nature Physics. “This work reconstructs the full state of a quantum liquid, consistent with the predictions of a quantum field theory—similar to those that describe the fundamental particles in our universe.”
Sels adds that the breakthrough offers promise for technological advancement.
In 2020, scientists picked up distinct brain signals that had never been observed before. The findings hint that the brain is a more powerful computational device than previously thought.
Distinct Brain Signals
According to Science Alert, researchers from German and Greek institutes reported at the time a brain mechanism in the outer cortical cells. They published their discoveries in the journal Science.
Moiré patterns occur everywhere. They are created by layering two similar but not identical geometric designs. A common example is the pattern that sometimes emerges when viewing a chain-link fence through a second chain-link fence.
For more than 10 years, scientists have been experimenting with the moiré pattern that emerges when a sheet of graphene is placed between two sheets of boron nitride. The resulting moiré pattern has shown tantalizing effects that could vastly improve semiconductor chips that are used to power everything from computers to cars.
A new study led by University at Buffalo researchers, and published in Nature Communications, demonstrated that graphene can live up to its promise in this context.
If I have a visual experience that I describe as a red tomato a meter away, then I am inclined to believe that there is, in fact, a red tomato a meter away, even if I close my eyes. I believe that my perceptions are, in the normal case, veridical—that they accurately depict aspects of the real world. But is my belief supported by our best science? In particular: Does evolution by natural selection favor veridical perceptions? Many scientists and philosophers claim that it does. But this claim, though plausible, has not been properly tested. In this talk, I present a new theorem: Veridical perceptions are never more fit than non-veridical perceptions which are simply tuned to the relevant fitness functions. This entails that perception is not a window on reality; it is more like a desktop interface on your laptop. I discuss this interface theory of perception and its implications for one of the most puzzling unsolved problems in science: the relationship between brain activity and conscious experiences.
Prof. Donald Hoffman, PhD received his PhD from MIT, and joined the faculty of the University of California, Irvine in 1983, where he is a Professor Emeritus of Cognitive Sciences. He is an author of over 100 scientific papers and three books, including Visual Intelligence, and The Case Against Reality. He received a Distinguished Scientific Award from the American Psychological Association for early career research, the Rustum Roy Award of the Chopra Foundation, and the Troland Research Award of the US National Academy of Sciences. His writing has appeared in Edge, New Scientist, LA Review of Books, and Scientific American and his work has been featured in Wired, Quanta, The Atlantic, and Through the Wormhole with Morgan Freeman. You can watch his TED Talk titled “Do we see reality as it is?” and you can follow him on Twitter @donalddhoffman.
The hidden Markov model (HMM) [1, 2] is a powerful model for describing sequential data and has been widely used in speech signal processing [3-5], computer vision [6-8], longitudinal data analysis [9], social networks [10-12] and so on. An HMM typically assumes the system has K internal states, and the transition of states forms a Markov chain. The system state cannot be observed directly, so we need to infer the hidden states and system parameters from observations. Due to the existence of latent variables, the Expectation-Maximisation (EM) algorithm [13, 14] is often used to learn an HMM. The main difficulty is to calculate site marginal distributions and pairwise marginal distributions based on the posterior distribution of latent variables. The forward-backward algorithm was specifically designed to tackle this problem. The derivation of the forward-backward algorithm relies heavily on HMM assumptions and probabilistic relationships between quantities, and thus requires the parameters in the posterior distribution to have explicit probabilistic meanings.
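To make the recursions concrete, here is a minimal sketch of the scaled forward-backward algorithm in Python/NumPy. This is not code from the paper; the variable names and the toy model are illustrative. It computes both the site marginals and the pairwise marginals mentioned above:

```python
import numpy as np

def forward_backward(pi, A, B_obs):
    """Scaled forward-backward pass for a K-state HMM.

    pi:    (K,)   initial state distribution
    A:     (K, K) transition matrix, A[i, j] = P(z_{t+1}=j | z_t=i)
    B_obs: (T, K) emission likelihoods, B_obs[t, k] = P(x_t | z_t=k)

    Returns site marginals gamma[t, k] = P(z_t=k | x_{1:T}) and
    pairwise marginals xi[t, i, j] = P(z_t=i, z_{t+1}=j | x_{1:T}).
    """
    T, K = B_obs.shape
    alpha = np.zeros((T, K))   # scaled forward messages
    beta = np.ones((T, K))     # scaled backward messages
    c = np.zeros(T)            # per-step scaling factors (avoid underflow)

    # Forward recursion.
    alpha[0] = pi * B_obs[0]
    c[0] = alpha[0].sum()
    alpha[0] /= c[0]
    for t in range(1, T):
        alpha[t] = (alpha[t - 1] @ A) * B_obs[t]
        c[t] = alpha[t].sum()
        alpha[t] /= c[t]

    # Backward recursion.
    for t in range(T - 2, -1, -1):
        beta[t] = (A @ (B_obs[t + 1] * beta[t + 1])) / c[t + 1]

    gamma = alpha * beta       # each row sums to 1
    xi = np.empty((T - 1, K, K))
    for t in range(T - 1):
        xi[t] = (alpha[t][:, None] * A
                 * (B_obs[t + 1] * beta[t + 1])[None, :]) / c[t + 1]
    return gamma, xi
```

In the E-step of EM, gamma supplies the state-occupancy statistics and xi the transition statistics; the scaling factors c also give the log-likelihood as the sum of their logarithms.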
Bayesian HMM [15-22] further imposes priors on the parameters of HMM, and the resulting model is more robust. It has been demonstrated that Bayesian HMM often outperforms HMM in applications. However, the learning process of a Bayesian HMM is more challenging since the posterior distribution of latent variables is intractable. Mean-field theory-based variational inference is often utilised in the E-step of the EM algorithm, which tries to find an optimal approximation of the posterior distribution in a factorised family. The variational inference iteration also involves computing site marginal distributions and pairwise marginal distributions given the joint distribution of system state indicator variables. Existing works [15-23] directly apply the forward-backward algorithm to obtain these values without justification. This is not theoretically sound and the result is not guaranteed to be correct, since the requirements of the forward-backward algorithm are not met in this case.
In this paper, we prove that the forward-backward algorithm can be applied in more general cases where the parameters have no probabilistic meanings. The first proof converts the general case to an HMM and uses the correctness of the forward-backward algorithm on HMM to prove the claim. The second proof is model-free, which derives the forward-backward algorithm in a totally different way. The new derivation does not rely on HMM assumptions and merely utilises matrix techniques to rewrite the desired quantities. Therefore, this derivation naturally proves that it is unnecessary to make probabilistic requirements on the parameters of the forward-backward algorithm. Specifically, we justify that heuristically applying the forward-backward algorithm in the variational learning of Bayesian HMM is theoretically sound and guaranteed to return the correct result.