
Tiny particles are interconnected despite sometimes being thousands of kilometers apart—Albert Einstein called this “spooky action at a distance.” Something that would be inexplicable by the laws of classical physics is a fundamental part of quantum physics. Entanglement like this can occur between multiple quantum particles, meaning that certain properties of the particles are intimately linked with each other.

Entangled systems containing multiple particles offer significant benefits in implementing quantum algorithms, which have potential applications in communications and quantum computing. Researchers from Paderborn University have been working with colleagues from Ulm University to develop the first programmable optical quantum memory. The study was published as an “Editor’s Suggestion” in the journal Physical Review Letters.

Algorithms have helped mathematicians perform fundamental operations for thousands of years. The ancient Egyptians created an algorithm to multiply two numbers without requiring a multiplication table, and Greek mathematician Euclid described an algorithm to compute the greatest common divisor, which is still in use today.
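Euclid’s greatest-common-divisor algorithm, mentioned above, fits in a few lines. A minimal Python sketch:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace the pair (a, b)
    with (b, a mod b) until the remainder is zero."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # 6
```

The same procedure Euclid described around 300 BC still runs essentially unchanged on modern hardware.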

During the Islamic Golden Age, Persian mathematician Muhammad ibn Musa al-Khwarizmi designed new algorithms to solve linear and quadratic equations. In fact, al-Khwarizmi’s name, translated into Latin as Algoritmi, led to the term algorithm. But, despite the familiarity with algorithms today – used throughout society from classroom algebra to cutting-edge scientific research – the process of discovering new algorithms is incredibly difficult, and an example of the amazing reasoning abilities of the human mind.

In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. This sheds light on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.
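The open question AlphaTensor addresses is how few scalar multiplications are needed to multiply two matrices. Strassen’s 1969 construction, which started the 50-year search, multiplies two 2×2 matrices with seven multiplications instead of the schoolbook eight; AlphaTensor searches for schemes of this kind automatically. A minimal Python sketch of Strassen’s seven-product scheme (not AlphaTensor’s own algorithms):

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices using Strassen's seven products
    (the schoolbook method needs eight)."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the seven products into the four entries of A @ B
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4, p1 + p5 - p3 - p7]]
```

Applied recursively to large matrices in blocks, saving one multiplication per level reduces the asymptotic cost from n³ to roughly n^2.81, which is why counting multiplications matters so much.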

The rise of quantum computing and its implications for current encryption standards are well known. But why exactly should quantum computers be especially adept at breaking encryption? The answer is a nifty bit of mathematical juggling called Shor’s algorithm. That still leaves the question: what does this algorithm do that makes quantum computers so much better at cracking encryption? In this video, YouTuber minutephysics explains it in his traditional whiteboard cartoon style.

“Quantum computation has the potential to make it super, super easy to access encrypted data — like having a lightsaber you can use to cut through any lock or barrier, no matter how strong,” minutephysics says. “Shor’s algorithm is that lightsaber.”

According to the video, Shor’s algorithm works off the observation that if you pick a number sharing no factors with the number securing the encryption, then repeatedly multiplying your guess by itself eventually produces one more than a multiple of that number. From the power at which this happens, you can form two new numbers – the guess raised to half that power, plus 1 and minus 1 – that share factors with the encryption number. Taking greatest common divisors then reveals its prime factors, which unlocks the encryption (specifically RSA here, but the idea applies to some other schemes), because the security of the key rests on those factors being secret.
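The number theory behind this can be sketched classically. The caveat is that everything interesting happens in finding the period r (the power at which the guess cycles back to 1): a quantum computer finds r exponentially faster, while the brute-force loop below is only feasible for tiny numbers. The function name is illustrative, not from the video:

```python
from math import gcd

def classical_shor(N: int, g: int):
    """Classical illustration of the arithmetic behind Shor's algorithm.
    Brute-forces the period r with g**r = 1 (mod N), then extracts
    factors of N from g**(r/2) +/- 1. Only feasible for tiny N."""
    assert gcd(g, N) == 1, "guess must share no factors with N"
    r, power = 1, g % N
    while power != 1:              # find the period r
        power = (power * g) % N
        r += 1
    if r % 2:
        return None                # odd period: try another guess
    half = pow(g, r // 2, N)       # g**(r/2) mod N
    f = gcd(half - 1, N)           # shared factor with N, if nontrivial
    if 1 < f < N:
        return f, N // f
    return None                    # unlucky guess: try another

print(classical_shor(15, 7))  # (3, 5)
```

For N = 15 and guess 7, the powers of 7 mod 15 cycle 7, 4, 13, 1, so r = 4, and gcd(7² − 1, 15) = 3 recovers the factorization 15 = 3 × 5.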

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flows on the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid’s utilities could fill in the most crucial gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed a tool to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework that guides the security investments of power utility companies to parts of the grid at greatest risk of causing a blackout if hacked.

A fluid dynamics theory that violates causality would always generate paradoxical instabilities—a result that could guide the search for a theory for relativistic fluids.

The theory of fluid dynamics has been successful in many areas of fundamental and applied sciences, describing fluids from dilute gases, such as air, to liquids, such as water. For most nonrelativistic fluids, the theory takes the form of the celebrated Navier-Stokes equation. However, fundamental problems arise when extending these equations to relativistic fluids. Such extensions typically imply paradoxes—for instance, thermodynamic states of the systems can appear stable or unstable to observers in different frames of reference. These problems hinder the description of the dynamics of important fluid systems, such as neutron-rich matter in neutron star mergers or the quark-gluon plasma produced in heavy-ion collisions.
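For reference, the nonrelativistic equation the passage refers to, for an incompressible fluid, can be written as:

```latex
\rho \left( \frac{\partial \mathbf{v}}{\partial t} + (\mathbf{v} \cdot \nabla)\mathbf{v} \right)
  = -\nabla p + \mu \nabla^2 \mathbf{v} + \mathbf{f}
```

where ρ is the fluid density, **v** the velocity field, p the pressure, μ the viscosity, and **f** any body force. The difficulty described above is that the dissipative (viscous) terms do not carry over naively to relativity: straightforward relativistic extensions allow signals to propagate acausally, producing the frame-dependent stability paradoxes the article mentions.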

Using artificial intelligence, physicists have compressed a daunting quantum problem that until now required 100,000 equations into a bite-size task of as few as four equations, all without sacrificing accuracy. The machine learning tool they trained captures the physics of electrons moving on a lattice using far fewer equations than would typically be required. The work could revolutionize how scientists investigate systems containing many interacting electrons. Furthermore, if scalable to other problems, the approach could potentially aid in the design of materials with extremely valuable properties, such as superconductivity or utility for clean energy generation.


For decades, enterprises have jury-rigged software designed for structured data when trying to solve unstructured, text-based data problems. Although these solutions performed poorly, there was nothing else. Recently, though, machine learning (ML) has improved significantly at understanding natural language.

Unsurprisingly, Silicon Valley is in a mad dash to build market-leading offerings for this new opportunity. Khosla Ventures thinks natural language processing (NLP) is the most important technology trend of the next five years. If the 2000s were about becoming a big data-enabled enterprise, and the 2010s were about becoming a data science-enabled enterprise — then the 2020s are about becoming a natural language-enabled enterprise.

The past may be a fixed and immutable point, but with the help of machine learning, the future can at times be more easily divined.

Using a new type of machine learning method called next generation reservoir computing, researchers at The Ohio State University have recently found a new way to predict the behavior of spatiotemporal chaotic systems—such as changes in Earth’s weather—that are particularly complex for scientists to forecast.

The study, published today in the journal Chaos: An Interdisciplinary Journal of Nonlinear Science, utilizes a new and highly efficient algorithm that, when combined with next generation reservoir computing, can learn spatiotemporal chaotic systems in a fraction of the time of other machine learning algorithms.
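As a loose illustration of the idea (not the authors’ code), next-generation reservoir computing replaces a large random reservoir with a small library of time-delayed inputs and their polynomial combinations, fit with ridge regression. A minimal sketch on the chaotic logistic map, a simple one-variable stand-in for the spatiotemporal systems studied:

```python
import numpy as np

# Generate a chaotic time series from the logistic map x -> 3.9 x (1 - x)
x = np.empty(600)
x[0] = 0.5
for t in range(599):
    x[t + 1] = 3.9 * x[t] * (1 - x[t])

k = 2  # number of time-delay taps

def features(series, t):
    """NG-RC feature vector: constant, delayed values, and their
    quadratic combinations -- the 'reservoir' is just this library."""
    lin = np.array([series[t], series[t - 1]])
    quad = np.outer(lin, lin)[np.triu_indices(k)]
    return np.concatenate(([1.0], lin, quad))

# Fit a linear readout by ridge regression on the first 500 points
train = range(k, 500)
Phi = np.array([features(x, t) for t in train])
y = np.array([x[t + 1] for t in train])
ridge = 1e-8
W = np.linalg.solve(Phi.T @ Phi + ridge * np.eye(Phi.shape[1]), Phi.T @ y)

# One-step-ahead prediction on held-out data
pred = np.array([features(x, t) @ W for t in range(500, 599)])
err = np.max(np.abs(pred - x[501:600]))
print(f"max one-step prediction error: {err:.2e}")
```

Because the logistic map is itself quadratic, it lies exactly in this small feature library, so the fit is essentially exact; for real systems like weather, the appeal is that this tiny, interpretable feature set trains in a fraction of the time of a conventional reservoir or deep network.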