
Exploring the decay processes of a quantum state weakly coupled to a finite-size reservoir

In quantum physics, Fermi’s golden rule, also known as the golden rule of time-dependent perturbation theory, is a formula used to calculate the rate at which an initial quantum state transitions into a continuum of final states (a so-called “bath”). This valuable equation has been applied to numerous physics problems, particularly those in which it is important to consider how systems respond to imposed perturbations and settle into stationary states over time.

Fermi’s golden rule specifically applies to instances in which an initial state is weakly coupled to a continuum of final states whose energies overlap its own. Researchers at the Centro Brasileiro de Pesquisas Físicas, Princeton University, and Universität zu Köln have recently set out to investigate what happens when a quantum state is instead coupled to a set of discrete final states with a nonzero mean level spacing, as observed in recent many-body physics studies.

“The decay of a quantum state into some continuum of final states (i.e., a ‘bath’) is commonly associated with incoherent decay processes, as described by Fermi’s golden rule,” Tobias Micklitz, one of the researchers who carried out the study, told Phys.org. “A standard example for this is an excited atom emitting a photon into an infinite vacuum. Present-day experiments, on the other hand, routinely realize composite systems involving quantum states coupled to effectively finite-size reservoirs that are composed of discrete sets of final states, rather than a continuum.”
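The rate formula itself is compact and easy to evaluate. Here is a minimal numerical sketch of Fermi's golden rule, Γ = (2π/ħ)|V|²ρ; the coupling matrix element V and the density of final states ρ used below are purely hypothetical values chosen for illustration, not data for any real system.

```python
import math

hbar = 1.0545718e-34  # reduced Planck constant, J*s

def golden_rule_rate(V, rho):
    """Fermi's golden rule: decay rate Gamma = (2*pi/hbar) * |V|^2 * rho,
    where V is the coupling matrix element (J) and rho is the density of
    final states near the initial energy (states per J)."""
    return (2 * math.pi / hbar) * abs(V) ** 2 * rho

# Illustrative, made-up values:
V = 1e-25    # coupling matrix element, J
rho = 1e20   # density of final states, 1/J

print(golden_rule_rate(V, rho))  # decay rate in 1/s
```

Note the quadratic dependence on the coupling: doubling V quadruples the decay rate, which is why the rule only holds in the weak-coupling regime the article describes.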

A first step towards quantum algorithms: Minimizing the guesswork of a quantum ensemble

Given the rapid pace at which technology is developing, it comes as no surprise that quantum technologies will become commonplace within decades. A big part of ushering in this new age of quantum computing requires a new understanding of both classical and quantum information and how the two can be related to each other.

Before classical information can be sent across quantum channels, it must first be encoded. This encoding is done by means of quantum ensembles. A quantum ensemble is a set of quantum states, each occurring with its own probability. To accurately receive the transmitted information, the receiver has to repeatedly ‘guess’ the state of the information being sent. This gives rise to a cost function called ‘guesswork’: the average number of guesses required to correctly identify the state.
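In the classical case the idea is easy to make concrete: the optimal strategy is to guess states in order of decreasing probability, and the guesswork is the probability-weighted average of the guess positions. A minimal sketch of that classical quantity (the quantum version studied by the team, where states need not be perfectly distinguishable, is considerably harder):

```python
def guesswork(probs):
    """Expected number of guesses to identify the state, guessing
    candidates in order of decreasing probability (optimal classically)."""
    ordered = sorted(probs, reverse=True)
    return sum(k * p for k, p in enumerate(ordered, start=1))

# Three states with probabilities 1/2, 1/4, 1/4:
# 1*0.5 + 2*0.25 + 3*0.25 = 1.75 guesses on average
print(guesswork([0.5, 0.25, 0.25]))  # 1.75
```

A uniform ensemble of n states gives the worst case, (n + 1)/2 guesses on average, while a deterministic ensemble needs exactly one.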

The concept of guesswork has been studied at length in classical ensembles, but the subject is still new for quantum ensembles. Recently, a research team from Japan—consisting of Prof. Takeshi Koshiba of Waseda University, Michele Dall’Arno from Waseda University and Kyoto University, and Prof. Francesco Buscemi from Nagoya University—has derived analytical solutions to the guesswork problem subject to a finite set of conditions. “The guesswork problem is fundamental in many scientific areas in which machine learning techniques or artificial intelligence are used. Our results trailblaze an algorithmic aspect of the guesswork problem,” says Koshiba. Their findings are published in IEEE Transactions on Information Theory.

The Many-Worlds Theory, Explained

Quantum physics is strange. At least, it is strange to us, because the rules of the quantum world, which govern the way the world works at the level of atoms and subatomic particles (the behavior of light and matter, as the renowned physicist Richard Feynman put it), are not the rules that we are familiar with — the rules of what we call “common sense.”

The quantum rules, which were mostly established by the end of the 1920s, seem to be telling us that a cat can be both alive and dead at the same time, while a particle can be in two places at once. But to the great distress of many physicists, let alone ordinary mortals, nobody (then or since) has been able to come up with a common-sense explanation of what is going on. Instead, physicists have sought solace in a variety of more or less desperate remedies to “explain” what is going on in the quantum world.

These remedies, the quanta of solace, are called “interpretations.” At the level of the equations, none of these interpretations is better than any other, although the interpreters and their followers will each tell you that their own favored interpretation is the one true faith, and all those who follow other faiths are heretics. On the other hand, none of the interpretations is worse than any of the others, mathematically speaking. Most probably, this means that we are missing something. One day, a glorious new description of the world may be discovered that makes all the same predictions as present-day quantum theory, but also makes sense. Well, at least we can hope.

New tool allows scientists to peer inside neutron stars

Imagine taking a star twice the mass of the sun and crushing it to the size of Manhattan. The result would be a neutron star—one of the densest objects found anywhere in the universe, exceeding the density of any material found naturally on Earth by a factor of tens of trillions. Neutron stars are extraordinary astrophysical objects in their own right, but their extreme densities might also allow them to function as laboratories for studying fundamental questions of nuclear physics, under conditions that could never be reproduced on Earth.

Because of these exotic conditions, scientists still do not understand what exactly neutron stars are made of, their so-called “equation of state” (EoS). Determining this is a major goal of modern astrophysics research. A new piece of the puzzle, constraining the range of possibilities, has been discovered by a pair of scholars at IAS: Carolyn Raithel, John N. Bahcall Fellow in the School of Natural Sciences; and Elias Most, Member in the School and John A. Wheeler Fellow at Princeton University. Their work was recently published in The Astrophysical Journal Letters.

Ideally, scientists would like to peek inside these exotic objects, but they are too small and distant to be imaged with standard telescopes. Scientists rely instead on indirect properties that they can measure—like the mass and radius of a neutron star—to calculate the EoS, the same way that one might use the length of two sides of a right-angled triangle to work out its hypotenuse. However, the radius of a neutron star is very difficult to measure precisely. One promising alternative for future observations is to instead use a quantity called the “peak spectral frequency” (or f2) in its place.

Stable Diffusion VR is a startling vision of the future of gaming

A while ago I spotted someone working on real-time AI image generation in VR, and I had to bring it to your attention because, frankly, I cannot express how majestic it is to watch AI-modulated AR shift the world before us into glorious, emergent dreamscapes.

Applying AI to augmented or virtual reality isn’t a novel concept, but there have been certain limitations in applying it—computing power being one of the major barriers to its practical usage. Stable Diffusion, however, is an image generation model pared down to run on consumer-level hardware, and it has been released under a Creative ML OpenRAIL-M licence. That means not only can developers use the tech to create and launch programs without renting huge amounts of server silicon, but they’re also free to profit from their creations.

Neuroscientist leads unprecedented research to map billions of brain cells

Circa 2018


Since the time of Hippocrates and Herophilus, scientists have placed the location of the mind, emotions and intelligence in the brain. For centuries, this theory was explored through anatomical dissection, as the early neuroscientists named and proposed functions for the various sections of this unusual organ. It wasn’t until the late 19th century that Camillo Golgi and Santiago Ramón y Cajal developed the methods to look deeper into the brain, using a silver stain to detect the long, stringy cells now known as neurons and their connections, called synapses.

Today, neuroanatomy involves the most powerful microscopes and computers on the planet. Viewing synapses, which are only nanometers in length, requires an electron microscope imaging a slice of brain thousands of times thinner than a sheet of paper. To map an entire human brain would require 300,000 of these images, and even reconstructing a small three-dimensional brain region from these snapshots requires roughly the same supercomputing power it takes to run an astronomy simulation of the universe.

Fortunately, both of these resources exist at Argonne, where, in 2015, Kasthuri was the first neuroscientist ever hired by the U.S. Department of Energy laboratory. Peter Littlewood, the former director of Argonne who brought him in, recognized that connectome research was going to be one of the great big data challenges of the coming decades, one that UChicago and Argonne were perfectly poised to tackle.

This Exoskeleton Uses Machine Learning to Put a Personalized Spring in Your Step

“This exoskeleton personalizes assistance as people walk normally through the real world,” said Steve Collins, associate professor of mechanical engineering who leads the Stanford Biomechatronics Laboratory, in a press release. “And it resulted in exceptional improvements in walking speed and energy economy.”

The personalization is enabled by a machine learning algorithm, which the team trained using emulators—that is, machines that collected data on motion and energy expenditure from volunteers who were hooked up to them. The volunteers walked at varying speeds under imagined scenarios, like trying to catch a bus or taking a stroll through a park.

The algorithm drew connections between these scenarios and peoples’ energy expenditure, applying the connections to learn in real time how to help wearers walk in a way that’s actually useful to them. When a new person puts on the boot, the algorithm tests a different pattern of assistance each time they walk, measuring how their movements change in response. There’s a short learning curve, but on average the algorithm was able to effectively tailor itself to new users in just an hour.
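The loop described above, test an assistance pattern, measure the wearer's response, keep what works, can be caricatured in a few lines. This is a toy sketch only, not the Stanford team's actual controller: the cost function standing in for measured energy expenditure and the "ideal" assistance level of 0.6 are invented for illustration.

```python
import random

random.seed(0)

# Hypothetical stand-in for measured metabolic cost: lowest near an
# (unknown to the algorithm) ideal assistance level of 0.6, plus sensor
# noise. The real system estimates cost from motion measurements.
def measured_cost(assistance):
    return (assistance - 0.6) ** 2 + random.gauss(0, 0.01)

# Human-in-the-loop search: try candidate assistance patterns while the
# user walks, keeping whichever yields the lowest measured cost.
best, best_cost = 0.0, float("inf")
for _ in range(50):
    candidate = random.uniform(0.0, 1.0)
    cost = measured_cost(candidate)
    if cost < best_cost:
        best, best_cost = candidate, cost

print(round(best, 2))  # converges near the ideal level of 0.6
```

Random sampling is the crudest possible search; the appeal of the approach is that the same measure-and-adapt loop works with far more sample-efficient optimizers, which is what makes the one-hour personalization time plausible.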

A molecular multi-qubit model system for quantum computing

Molecules could make useful systems for quantum computers, but they must contain individually addressable, interacting quantum bit centers. In the journal Angewandte Chemie, a team of researchers has now presented a molecular model with three different coupled qubit centers. As each center is spectroscopically addressable, quantum information processing (QIP) algorithms could be developed for this molecular multi-qubit system for the first time, the team says.

Conventional computers compute using bits, while quantum computers use quantum bits (or qubits for short). While a conventional bit can only represent 0 or 1, a qubit can store both states in superposition at the same time. These superposed states mean that a quantum computer can carry out parallel calculations, and with a number of qubits it has the potential to be much faster than a standard computer.
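The scaling behind that claim is easy to sketch: an n-qubit register is described by 2**n complex amplitudes, so each added qubit doubles the size of the state the machine holds in superposition. A minimal illustration:

```python
import math

def equal_superposition(n):
    """Statevector of n qubits in a uniform superposition: 2**n
    amplitudes, each 1/sqrt(2**n) so the probabilities sum to 1."""
    dim = 2 ** n
    amp = 1 / math.sqrt(dim)
    return [amp] * dim

state = equal_superposition(3)
print(len(state))  # 8 amplitudes for 3 qubits
print(sum(abs(a) ** 2 for a in state))  # measurement probabilities sum to ~1
```

Three qubits already track eight amplitudes at once; the molecular three-qubit system described above targets exactly this regime, provided each center can be addressed and manipulated individually.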

However, in order for the quantum computer to perform these calculations, it must be able to evaluate and manipulate the multi-qubit information. The research teams of Alice Bowen and Richard Winpenny, University of Manchester, UK, and their colleagues have now produced a molecular model system with several separate qubit units, whose states can be detected spectroscopically and switched through their interactions with one another.

DeepMind breaks 50-year math record using AI; new record falls a week later

Matrix multiplication is at the heart of many machine learning breakthroughs, and it just got faster—twice. Last week, DeepMind announced it discovered a more efficient way to perform matrix multiplication, conquering a 50-year-old record. This week, two Austrian researchers at Johannes Kepler University Linz claim they have bested that new record by one step.

In 1969, a German mathematician named Volker Strassen discovered the previous-best algorithm for multiplying 4×4 matrices, which reduces the number of steps necessary to perform a matrix calculation. For example, multiplying two 4×4 matrices together using a traditional schoolroom method would take 64 multiplications, while Strassen’s algorithm can perform the same feat in 49 multiplications.
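For 2×2 matrices, Strassen's scheme uses seven multiplications where the schoolbook method uses eight; applied recursively to a 4×4 matrix viewed as a 2×2 grid of 2×2 blocks, that yields the 7 × 7 = 49 multiplications mentioned above, versus 64. A minimal sketch of the 2×2 base case:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the schoolbook method's 8."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

The seven products trade multiplications for extra additions, a worthwhile trade because, applied recursively to large matrices, fewer multiplications per level lowers the asymptotic exponent below the schoolbook method's 3.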

New AI Algorithms Predict Sports Teams’ Moves With 80% Accuracy

Now the Cornell Laboratory for Intelligent Systems and Controls, which developed the algorithms, is collaborating with the Big Red hockey team to expand the research project’s applications.

Representing Cornell University, the Big Red men’s ice hockey team is a National Collegiate Athletic Association Division I college ice hockey program. Cornell Big Red competes in the ECAC Hockey conference and plays its home games at Lynah Rink in Ithaca, New York.
