How Quantum Physics Leads to Decrypting Common Algorithms

The rise of quantum computing and its implications for current encryption standards are well known. But why exactly should quantum computers be especially adept at breaking encryption? The answer is a nifty bit of mathematical juggling called Shor’s algorithm. The question that remains is: what does this algorithm do that makes quantum computers so much better at cracking encryption? In this video, YouTuber minutephysics explains it in his traditional whiteboard cartoon style.

“Quantum computation has the potential to make it super, super easy to access encrypted data — like having a lightsaber you can use to cut through any lock or barrier, no matter how strong,” minutephysics says. “Shor’s algorithm is that lightsaber.”

According to the video, Shor’s algorithm works off the observation that if you pick a guess number that shares no factors with the number you want to factor, then repeatedly multiplying the guess by itself will eventually land on one more than a multiple of that number. So you take a guess, find the power at which this happens, and then adding and subtracting 1 from the guess raised to half that power gives you two numbers that likely share factors with the target number. That would unlock the encryption (specifically RSA here, though it applies to some other schemes too), because we would then have both factors.
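The arithmetic behind this trick can be simulated classically, as in the sketch below. Everything here is ordinary number theory; the only step a quantum computer accelerates is finding the period, which this sketch does by brute force.

```python
from math import gcd

def classical_shor(N, g):
    """Classically simulate the number-theoretic core of Shor's algorithm.

    Given a guess g sharing no factors with N, find the period p such
    that g**p is one more than a multiple of N; then gcd(g**(p//2) +/- 1, N)
    often yields a nontrivial factor of N. The quantum speedup lies
    entirely in finding p, which is done here by slow brute force.
    """
    if gcd(g, N) != 1:
        return gcd(g, N)  # lucky guess: g already shares a factor with N
    # Brute-force the period p of g modulo N (fast on a quantum computer)
    p, x = 1, g % N
    while x != 1:
        x = (x * g) % N
        p += 1
    if p % 2 == 1:
        return None  # odd period: this guess fails, pick another g
    half = pow(g, p // 2, N)  # g^(p/2) mod N
    for candidate in (gcd(half - 1, N), gcd(half + 1, N)):
        if 1 < candidate < N:
            return candidate
    return None

# Example: N = 15 with guess g = 7 has period 4,
# and gcd(7**2 - 1, 15) = 3 is a factor of 15.
```

For real RSA key sizes the period-finding loop is hopelessly slow on classical hardware, which is exactly the gap Shor's quantum subroutine closes.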

As ransomware attacks increase, new algorithm may help prevent power blackouts

Millions of people could suddenly lose electricity if a ransomware attack just slightly tweaked energy flow onto the U.S. power grid.

No single power utility company has enough resources to protect the entire grid, but maybe all 3,000 of the grid’s utilities could fill in the most crucial gaps if there were a map showing where to prioritize their security investments.

Purdue University researchers have developed an algorithm to create that map. Using this tool, regulatory authorities or cyber insurance companies could establish a framework that guides the security investments of power utility companies to the parts of the grid at greatest risk of causing a blackout if hacked.
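To make the idea concrete, here is a hypothetical sketch (not the Purdue researchers' actual algorithm, and all names and numbers are invented): rank grid components by expected load at risk, then split a fixed security budget in proportion to that risk.

```python
# Hypothetical prioritization sketch: allocate a security budget across
# grid components in proportion to expected load at risk, defined as
# (probability of compromise) * (load served in megawatts).

def prioritize_investments(components, budget):
    """components: list of (name, p_compromise, load_served_mw) tuples.
    Returns a dict mapping each component to its share of the budget."""
    risk = {name: p * load for name, p, load in components}
    total = sum(risk.values())
    return {name: budget * r / total for name, r in risk.items()}

grid = [
    ("substation_A", 0.30, 500.0),    # likely target serving moderate load
    ("substation_B", 0.05, 800.0),    # well-defended, larger load
    ("control_center", 0.10, 1200.0), # hardened but high-impact
]
allocation = prioritize_investments(grid, budget=1_000_000)
```

A real framework would model cascading failures across the network rather than scoring components independently, but the output has the same shape: a map from components to recommended investment.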

Seeking Stability in a Relativistic Fluid

A fluid dynamics theory that violates causality would always generate paradoxical instabilities—a result that could guide the search for a consistent theory of relativistic fluids.

The theory of fluid dynamics has been successful in many areas of fundamental and applied sciences, describing fluids from dilute gases, such as air, to liquids, such as water. For most nonrelativistic fluids, the theory takes the form of the celebrated Navier-Stokes equation. However, fundamental problems arise when extending these equations to relativistic fluids. Such extensions typically imply paradoxes—for instance, thermodynamic states of the systems can appear stable or unstable to observers in different frames of reference. These problems hinder the description of the dynamics of important fluid systems, such as neutron-rich matter in neutron star mergers or the quark-gluon plasma produced in heavy-ion collisions.
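For reference, the celebrated nonrelativistic theory mentioned above takes, in its standard incompressible form, the following shape (with density \(\rho\), velocity field \(\mathbf{v}\), pressure \(p\), and viscosity \(\mu\)):

```latex
% Incompressible Navier-Stokes momentum equation and continuity constraint
\rho \left( \frac{\partial \mathbf{v}}{\partial t}
      + (\mathbf{v} \cdot \nabla)\mathbf{v} \right)
  = -\nabla p + \mu \nabla^{2} \mathbf{v},
\qquad
\nabla \cdot \mathbf{v} = 0
```

The relativistic extensions at issue must reduce to this equation in the low-velocity limit while remaining causal and stable in every frame of reference.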

Uncovering Hidden Patterns: AI Reduces a 100,000-Equation Quantum Physics Problem to Only Four Equations

Scientists trained a machine learning tool to capture the physics of electrons moving on a lattice using far fewer equations than would typically be required, all without sacrificing accuracy. A daunting quantum problem that until now required 100,000 equations has been compressed into a bite-size task of as few as four equations by physicists using artificial intelligence. All of this was accomplished without sacrificing accuracy. The work could revolutionize how scientists investigate systems containing many interacting electrons. Furthermore, if scalable to other problems, the approach could potentially aid in the design of materials with extremely valuable properties such as superconductivity or utility for clean energy generation.

How to choose the right NLP solution


For decades, enterprises have jury-rigged software designed for structured data when trying to solve unstructured, text-based data problems. Although these solutions performed poorly, there was nothing else. Recently, though, machine learning (ML) has improved significantly at understanding natural language.

Unsurprisingly, Silicon Valley is in a mad dash to build market-leading offerings for this new opportunity. Khosla Ventures thinks natural language processing (NLP) is the most important technology trend of the next five years. If the 2000s were about becoming a big data-enabled enterprise, and the 2010s were about becoming a data science-enabled enterprise — then the 2020s are about becoming a natural language-enabled enterprise.

Machine learning helps scientists peer (a second) into the future

The past may be a fixed and immutable point, but with the help of machine learning, the future can at times be more easily divined.

Using a new type of machine learning method called next generation reservoir computing, researchers at The Ohio State University have recently found a new way to predict the behavior of spatiotemporal chaotic systems—such as changes in Earth’s weather—that are particularly complex for scientists to forecast.

The study, published today in the journal Chaos: An Interdisciplinary Journal of Nonlinear Science, utilizes a new, highly efficient algorithm that, when combined with next generation reservoir computing, can learn spatiotemporal chaotic systems in a fraction of the time of other machine learning algorithms.
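Next generation reservoir computing is, at its core, a nonlinear vector autoregression: build polynomial features from recent states of the system, then fit a linear readout by ridge regression. The sketch below applies that general recipe to the chaotic logistic map rather than the spatiotemporal systems in the study, purely as a minimal illustration.

```python
import numpy as np

# Minimal next-generation-reservoir-computing-style sketch: polynomial
# features of the current state plus a ridge-regressed linear readout,
# trained to predict one step ahead of the chaotic logistic map
# x_{t+1} = r * x_t * (1 - x_t).

def logistic_series(x0=0.2, r=3.9, n=500):
    xs = [x0]
    for _ in range(n - 1):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return np.array(xs)

def features(x):
    # constant + linear + quadratic terms of the current state
    return np.stack([np.ones_like(x), x, x**2], axis=1)

x = logistic_series()
X, y = features(x[:-1]), x[1:]          # inputs and one-step-ahead targets
ridge = 1e-8                            # small regularization for stability
W = np.linalg.solve(X.T @ X + ridge * np.eye(3), X.T @ y)

# One-step forecasts from the learned linear readout
pred = X @ W
```

Because the logistic map is exactly quadratic, the readout recovers the dynamics almost perfectly; for spatiotemporal systems the feature vector also includes time-delayed and neighboring states, but the cheap one-shot linear fit is what makes the approach so much faster than training a deep network.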

This Cyber Security Service Utilizes Artificial Intelligence

As everyday technologies become more and more advanced, cyber security must be a top priority for every customer. Cyber security services have become common and are often used by private companies and the public sector to protect themselves from potential cyber attacks.

One of these services, Darktrace, recently acquired Cybersprint, a Dutch provider of advanced cyber security services whose tools use machine learning algorithms to detect cyber vulnerabilities. Based on attack path modeling and graph theory, Darktrace’s platform represents organizational networks as directed, weighted graphs, with nodes standing for devices or assets and edges for the connections between them. The edge weights can then be used to estimate the probability that an attacker will successfully move from node A to node B. These insights make it easier for Darktrace to simulate future attacks.
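The graph formulation above can be illustrated with a small sketch (the network, names, and probabilities here are invented for illustration, not Darktrace's actual model). If each edge weight is the probability that an attacker can traverse that hop, the most likely attack path maximizes the product of those probabilities, which is a shortest-path problem under negative log weights:

```python
import heapq
from math import log, exp

def most_likely_path(edges, start, goal):
    """edges: list of (src, dst, p) where p is the per-hop probability
    that an attacker moves from src to dst. Returns the most likely
    attack path and its overall probability, via Dijkstra on -log(p)."""
    graph = {}
    for u, v, p in edges:
        graph.setdefault(u, []).append((v, -log(p)))
    dist, prev, seen = {start: 0.0}, {}, set()
    heap = [(0.0, start)]
    while heap:
        d, u = heapq.heappop(heap)
        if u in seen:
            continue
        seen.add(u)
        if u == goal:
            break
        for v, w in graph.get(u, []):
            if d + w < dist.get(v, float("inf")):
                dist[v], prev[v] = d + w, u
                heapq.heappush(heap, (d + w, v))
    if goal not in dist:
        return None, 0.0
    path, node = [goal], goal
    while node != start:
        node = prev[node]
        path.append(node)
    return path[::-1], exp(-dist[goal])

# Invented example network: hop probabilities on a directed graph
edges = [
    ("laptop", "fileserver", 0.6),
    ("laptop", "mailserver", 0.9),
    ("mailserver", "fileserver", 0.8),
    ("fileserver", "domain_controller", 0.5),
]
path, prob = most_likely_path(edges, "laptop", "domain_controller")
```

Here the indirect route through the mail server (0.9 x 0.8 x 0.5 = 0.36) beats the direct hop to the file server (0.6 x 0.5 = 0.30), which is exactly the kind of non-obvious attack path such modeling surfaces.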

Bioinspired robots walk, swim, slither and fly

Many roboticists are looking to biology for inspiration in robot design, particularly in the area of locomotion. Although big industrial robots in vehicle factories, for instance, remain anchored in place, other robots will be more useful if they can move through the world, performing different tasks and coordinating their behaviour.

Such robotic schools could be tasked with locating and recording data on coral reefs to help researchers study the reefs’ health over time. Just as living fish in a school might engage in different behaviours simultaneously — some mating, some caring for young, others finding food — but suddenly move as one when a predator approaches, robotic fish would have to perform individual tasks while communicating to each other when it’s time to do something different.

“The majority of what my lab really looks at is the coordination techniques — what kinds of algorithms have evolved in nature to make systems work well together?” she says.