
In a new study from Skoltech and the University of Kentucky, researchers found a new connection between quantum information and quantum field theory. This work attests to the growing role of quantum information theory across various areas of physics. The paper was published in the journal Physical Review Letters.

Quantum information plays an increasingly important role as an organizing principle connecting various branches of physics. In particular, the theory of quantum error correction, which describes how to protect and recover information in quantum computers and other complex interacting systems, has become one of the building blocks of the modern understanding of quantum gravity.

“Normally, information stored in physical systems is localized. Say, a computer file occupies a particular small area of the hard drive. By ‘error’ we mean any unforeseen or undesired interaction which scrambles information over an extended area. In our example, pieces of the computer file would be scattered over different areas of the hard drive. Error-correcting codes are mathematical protocols that allow collecting these pieces together to recover the original information. They are in heavy use in data storage and communication systems. Quantum error-correcting codes play a similar role in cases when the quantum nature of the physical system is important,” Anatoly Dymarsky, Associate Professor at the Skoltech Center for Energy Science and Technology (CEST), explains.
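
To make the “spread, then reassemble” idea concrete, here is a minimal sketch of the simplest classical error-correcting code: a three-bit repetition code with majority-vote decoding. It is only an analogy for the quantum case (quantum codes must protect superpositions without directly measuring them), and the error model and parameters below are illustrative assumptions rather than anything from the study.

```python
# Classical 3-bit repetition code: spread one bit over three locations,
# corrupt each location independently, then recover by majority vote.
# Purely illustrative; quantum error correction differs in the details.
import random

def encode(bit):
    """Spread one logical bit across three physical bits."""
    return [bit, bit, bit]

def corrupt(bits, p=0.1):
    """Flip each physical bit independently with probability p (the 'error')."""
    return [b ^ 1 if random.random() < p else b for b in bits]

def decode(bits):
    """Recover the logical bit by majority vote."""
    return int(sum(bits) >= 2)

random.seed(0)
trials = 100_000
failures = sum(decode(corrupt(encode(1))) != 1 for _ in range(trials))
print(f"logical error rate: {failures / trials:.4f} (raw bit error rate: 0.1)")
```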

“Our mathematical equation lets us predict which individuals will have both more happiness and more brain activity for intrinsic compared to extrinsic rewards. The same approach can be used in principle to measure what people actually prefer without asking them explicitly, but simply by measuring their mood.”


Summary: A new mathematical equation predicts which individuals will have more happiness and increased brain activity for intrinsic rather than extrinsic rewards. The approach can be used to predict personal preferences based on mood and without asking the individual.

Source: UCL

A new study led by researchers at the Wellcome Centre for Human Neuroimaging shows that using mathematical equations with continuous mood sampling may be better at assessing what people prefer than asking them directly.
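
As a rough illustration of the kind of model involved, the sketch below follows the general family of computational “momentary happiness” models used in this field, in which reported mood is a weighted, exponentially discounted sum of recent reward-related quantities. The exact equation and fitted weights from the new study are not reproduced here; the function, variable names, and parameter values are assumptions for illustration only.

```python
# Hypothetical momentary-mood model: baseline plus discounted sums of recent
# rewards and reward prediction errors. Weights and discount factor are made up.
import numpy as np

def predicted_happiness(rewards, expectations, w0=0.5, w_reward=0.6,
                        w_rpe=0.4, gamma=0.7):
    """Predicted mood after the last trial: baseline, plus a discounted sum of
    received rewards, plus a discounted sum of reward prediction errors
    (received minus expected). Older trials are discounted more heavily."""
    rewards = np.asarray(rewards, dtype=float)
    rpe = rewards - np.asarray(expectations, dtype=float)
    discounts = gamma ** np.arange(len(rewards) - 1, -1, -1)
    return w0 + w_reward * discounts @ rewards + w_rpe * discounts @ rpe

# Example: predicted mood after five trials of small monetary outcomes.
print(predicted_happiness(rewards=[1, 0, 2, 1, 3], expectations=[1, 1, 1, 1, 1]))
```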

Circa 2012


Faraday and Dirac constructed magnetic monopoles using the practical and mathematical tools available to them. Now physicists have engineered effective monopoles by combining modern optics with nanotechnology. Part matter and part light, these magnetic monopoles travel at unprecedented speeds.

In classical physics (as every student should know) there are no sources or sinks of magnetic field, and hence no magnetic monopoles. Even so, a tight bundle of magnetic flux — such as that created by a long string of magnetic dipoles — has an apparent source or sink at its end. If we map the lines of force with a plotting compass, we think we see a magnetic monopole as our compass cannot enter the region of dense flux. In 1821, Michael Faraday constructed an effective monopole of this sort by floating a long thin bar magnet upright in a bowl of mercury, with the lower end tethered and the upper end free to move like a monopole in the horizontal plane.

Reservoir computing, a machine learning algorithm that mimics the workings of the human brain, is revolutionizing how scientists tackle the most complex data-processing challenges. Now, researchers have discovered a new technique that can make it up to a million times faster on specific tasks while using far fewer computing resources and far less input data.

With the next-generation technique, the researchers were able to solve a complex computing problem in less than a second on a desktop computer. Complex problems of this kind, such as forecasting the evolution of dynamic systems like the weather that change over time, are exactly why reservoir computing was developed in the early 2000s.

These systems can be extremely difficult to predict, with the “butterfly effect” being a well-known example. The concept, which is closely associated with the work of mathematician and meteorologist Edward Lorenz, essentially describes how a butterfly fluttering its wings can influence the weather weeks later. Reservoir computing is well suited to learning such dynamic systems and can provide accurate projections of how they will behave in the future; however, the larger and more complex the system, the more computing resources, the larger the network of artificial neurons, and the more time are required to obtain accurate forecasts.

Reservoir computing is already one of the most advanced and most powerful types of artificial intelligence that scientists have at their disposal – and now a new study outlines how to make it up to a million times faster on certain tasks.

That’s an exciting development when it comes to tackling the most complex computational challenges, from predicting the way the weather is going to turn, to modeling the flow of fluids through a particular space.

Such problems are what this type of resource-intensive computing was developed to take on; now, the latest innovations are going to make it even more useful. The team behind this new study is calling it the next generation of reservoir computing.
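
For readers who want a feel for how such a forecaster is put together, below is a minimal sketch in the spirit of “next generation” reservoir computing applied to the Lorenz system: time-delay and quadratic features feed a linear readout trained by ridge regression, and the trained model is then run autonomously to forecast. The integration scheme, feature choices, and hyperparameters are illustrative assumptions, not the authors' exact implementation.

```python
# Sketch of a next-generation-style reservoir computer forecasting the Lorenz system.
import numpy as np

def lorenz_trajectory(n_steps, dt=0.01, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    """Integrate the Lorenz equations with a basic 4th-order Runge-Kutta scheme."""
    def f(s):
        x, y, z = s
        return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])
    traj, s = np.empty((n_steps, 3)), np.array([1.0, 1.0, 1.0])
    for i in range(n_steps):
        k1 = f(s); k2 = f(s + 0.5 * dt * k1)
        k3 = f(s + 0.5 * dt * k2); k4 = f(s + dt * k3)
        s = s + dt * (k1 + 2 * k2 + 2 * k3 + k4) / 6.0
        traj[i] = s
    return traj

def features(data, k=2):
    """Linear part: current state plus k-1 delayed states; nonlinear part:
    unique quadratic monomials of the linear features; plus a constant."""
    n, d = data.shape
    lin = np.hstack([data[k - 1 - j : n - j] for j in range(k)])
    quad = np.einsum("ti,tj->tij", lin, lin)
    iu = np.triu_indices(d * k)
    return np.hstack([np.ones((lin.shape[0], 1)), lin, quad[:, iu[0], iu[1]]])

data = lorenz_trajectory(6000)
train = data[:4000]

k, ridge = 2, 1e-6                               # delay taps and regularization (assumed)
X = features(train, k)[:-1]                      # features at time t
Y = train[k:] - train[k - 1:-1]                  # target: one-step increment
W = np.linalg.solve(X.T @ X + ridge * np.eye(X.shape[1]), X.T @ Y)

# Autonomous forecast: feed each prediction back in as the newest input.
history, forecast = list(train[-k:]), []
for _ in range(500):
    nxt = history[-1] + features(np.array(history[-k:]), k)[0] @ W
    forecast.append(nxt)
    history.append(nxt)
print("forecast shape:", np.array(forecast).shape)
```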

Black holes are getting weirder by the day. When scientists first confirmed the behemoths existed back in the 1970s, we thought they were pretty simple, inert corpses. Then, famed physicist Stephen Hawking discovered that black holes aren’t exactly black and they actually emit heat. And now, a pair of physicists has realized that the sort-of-dark objects also exert a pressure on their surroundings.

The finding that such simple, non-rotating “black holes have a pressure as well as a temperature is even more exciting given that it was a total surprise,” co-author Xavier Calmet, a professor of physics at the University of Sussex in England, said in a statement.
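
The “heat” referred to here is Hawking radiation. For a non-rotating (Schwarzschild) black hole the Hawking temperature is T_H = ħc³ / (8πGMk_B), and the quick check below evaluates it for a black hole of one solar mass. It is only a back-of-the-envelope illustration; the new pressure result itself is not reproduced here.

```python
# Hawking temperature of a solar-mass Schwarzschild black hole (SI units).
from math import pi

hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # gravitational constant, m^3 kg^-1 s^-2
k_B = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.98847e30       # solar mass, kg

T_H = hbar * c**3 / (8 * pi * G * M_sun * k_B)
print(f"Hawking temperature of a solar-mass black hole: {T_H:.2e} K")  # about 6e-8 K
```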

I predicted that by 2030 you would be able to tell an AI assistant to build brand new books, movies, TV, video games, etc… on demand. That has now arrived, although in its Very Early stages. Look forward to building whatever media you want, or changing existing media into whatever you want.

“OpenAI Codex: Just Say What You Want!”


❤️ Check out Perceptilabs and sign up for a free demo here: https://www.perceptilabs.com/papers.

Cumrun Vafa is a theoretical physicist at Harvard. Please support this podcast by checking out our sponsors:
- Headspace: https://headspace.com/lex to get free 1 month trial.
- The Jordan Harbinger Show: https://www.youtube.com/thejordanharbingershow.
- Squarespace: https://lexfridman.com/squarespace and use code LEX to get 10% off.
- Allform: https://allform.com/lex to get 20% off.

CORRECTIONS:
- I’m currently hiring folks to help me with editing and image overlays so there may be some errors in overlays (as in this episode) as we build up a team. I ask for your patience.
- At 1 hour 27 minute mark, we overlay an image of Brian Greene. We meant to overlay an image of Michael Green, an early pioneer of string theory: https://bit.ly/michael-green-physicist.
- The image overlay of the heliocentric model is incorrect.

EPISODE LINKS:
Cumrun’s Twitter: https://twitter.com/cumrunv.
Cumrun’s Website: https://www.cumrunvafa.org.
Puzzles to Unravel the Universe (book): https://amzn.to/3BFk5ms.

CERN Courier


Jennifer Ngadiuba and Maurizio Pierini describe how ‘unsupervised’ machine learning could keep watch for signs of new physics at the LHC that have not yet been dreamt up by physicists.

In the 1970s, the robust mathematical framework of the Standard Model (SM) replaced data observation as the dominant starting point for scientific inquiry in particle physics. Decades-long physics programmes were put together based on its predictions. Physicists built complex and highly successful experiments at particle colliders, culminating in the discovery of the Higgs boson at the LHC in 2012.

Along this journey, particle physicists adapted their methods to deal with ever-growing data volumes and rates. To handle the large amount of data generated in collisions, they had to optimise real-time selection algorithms, or triggers. The field became an early adopter of artificial intelligence (AI) techniques, especially those falling under the umbrella of “supervised” machine learning. Verifying the SM’s predictions or exposing its shortcomings became the main goal of particle physics. But with the SM now apparently complete, and supervised studies incrementally excluding favoured models of new physics, “unsupervised” learning has the potential to lead the field into the uncharted waters beyond the SM.
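
One common realisation of this “unsupervised” strategy is an autoencoder trained only on ordinary, Standard Model-like events, which then flags events it reconstructs poorly as anomaly candidates. The sketch below uses synthetic stand-in features rather than real collision data, and its architecture and threshold are illustrative assumptions, not the algorithms actually deployed in LHC triggers.

```python
# Toy autoencoder-based anomaly detection on synthetic "event features".
import torch
from torch import nn

torch.manual_seed(0)

# Synthetic stand-ins: 10,000 "SM-like" events with 16 features each,
# plus 50 anomalous events drawn from a shifted distribution.
sm_events = torch.randn(10_000, 16)
anomalies = torch.randn(50, 16) + 3.0

# Small fully connected autoencoder with a narrow bottleneck.
model = nn.Sequential(
    nn.Linear(16, 8), nn.ReLU(),
    nn.Linear(8, 3), nn.ReLU(),   # bottleneck
    nn.Linear(3, 8), nn.ReLU(),
    nn.Linear(8, 16),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Train only on SM-like events: the network learns to reconstruct "ordinary" physics.
for epoch in range(20):
    for batch in sm_events.split(256):
        optimizer.zero_grad()
        loss = loss_fn(model(batch), batch)
        loss.backward()
        optimizer.step()

# Events the model reconstructs poorly are flagged as candidates for new physics.
with torch.no_grad():
    def reconstruction_error(x):
        return ((model(x) - x) ** 2).mean(dim=1)
    threshold = reconstruction_error(sm_events).quantile(0.999)
    flagged = (reconstruction_error(anomalies) > threshold).float().mean()
print(f"fraction of injected anomalies flagged: {flagged.item():.2f}")
```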