Improving the efficiency of algorithms for fundamental computations is a crucial task, as it influences the overall pace of the enormous number of calculations that depend on them. One such simple task is matrix multiplication, which can be found in systems ranging from neural networks to scientific computing routines. Machine learning has the potential to go beyond human intuition and beat the best human-designed algorithms currently available. However, the vast number of possible algorithms makes this process of automated algorithm discovery complicated. DeepMind recently made a breakthrough by developing AlphaTensor, the first-ever artificial intelligence (AI) system for discovering new, efficient, and provably correct algorithms for essential operations like matrix multiplication. Their approach addresses a mathematical question that has been open for over 50 years: how to multiply two matrices as quickly as possible.

AlphaZero, an agent that showed superhuman performance in board games like chess, Go, and shogi, is the foundation upon which AlphaTensor is built. The system marks AlphaZero’s first progression from playing traditional games to solving complex mathematical problems. The team believes this study represents an important milestone in DeepMind’s objective to advance science and use AI to solve the most fundamental problems. The research has also been published in the journal Nature.

Matrix multiplication has numerous real-world applications despite being one of the simplest operations taught to students in high school. The method is used for many things, including processing images on smartphones, recognizing spoken commands, and rendering graphics for video games. Developing computing hardware that multiplies matrices efficiently consumes considerable resources; therefore, even small gains in matrix multiplication efficiency can have a significant impact. The study investigates how the automatic development of new matrix multiplication algorithms can be advanced with contemporary AI approaches. AlphaTensor goes beyond human intuition to find algorithms that are more efficient than the state of the art for many matrix sizes. That its AI-designed algorithms outperform those created by humans represents a significant advance in algorithmic discovery.

Non-scientific answers to the question of what came before the universe have invoked many gods, and have been the basis of all religions and most philosophy since the beginning of recorded time.

Now a team of mathematicians from Canada and Egypt has used cutting-edge scientific theory and a mind-boggling set of equations to work out what preceded the universe in which we live.

In (very) simple terms, they applied the theories of the very small (the world of quantum mechanics) to the whole universe (described by the general theory of relativity), and discovered that the universe basically goes through four different phases.

A physicist from the University of Campinas in Brazil isn’t a big fan of the idea that time started with a so-called Big Bang. Instead, Juliano César Silva Neves imagines a collapse followed by a sudden expansion, one that could even still carry the scars of a previous timeline.

The idea itself isn’t new, but Neves has used a fifty-year-old mathematical trick for describing black holes to show how our Universe needn’t have had such a compact start to existence. At first glance, our Universe doesn’t seem to have much in common with black holes. One is an expanding space full of clumpy bits; the other is mass pulling at space so hard that even light has no hope of escape. But at the heart of both lies a concept known as a singularity: a volume of energy so infinitely dense, we can’t even begin to explain what’s going on inside it.

Algorithms have helped mathematicians perform fundamental operations for thousands of years. The ancient Egyptians created an algorithm to multiply two numbers without requiring a multiplication table, and Greek mathematician Euclid described an algorithm to compute the greatest common divisor, which is still in use today.
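As a minimal illustration (ours, not from the original texts), Euclid’s procedure translates directly into a few lines of Python:

```python
def gcd(a: int, b: int) -> int:
    """Euclid's algorithm: repeatedly replace (a, b) with (b, a % b);
    when the remainder hits zero, the other value is the GCD."""
    while b:
        a, b = b, a % b
    return a

print(gcd(48, 18))  # -> 6
```

The same idea Euclid described geometrically, as repeated subtraction of line segments, survives here as the modulo operation.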

During the Islamic Golden Age, the Persian mathematician Muhammad ibn Musa al-Khwarizmi designed new algorithms to solve linear and quadratic equations. In fact, al-Khwarizmi’s name, translated into Latin as Algoritmi, led to the term algorithm. But despite our familiarity with algorithms today, used throughout society from classroom algebra to cutting-edge scientific research, the process of discovering new algorithms is incredibly difficult, and an example of the amazing reasoning abilities of the human mind.

In our paper, published today in Nature, we introduce AlphaTensor, the first artificial intelligence (AI) system for discovering novel, efficient, and provably correct algorithms for fundamental tasks such as matrix multiplication. This sheds light on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.
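The 50-year clock on that question started with Strassen’s 1969 discovery that two 2×2 matrices can be multiplied with 7 scalar multiplications instead of the naive 8, which, applied recursively, beats the schoolbook method for large matrices. A short NumPy sketch of Strassen’s scheme (our illustration; the paper’s newly discovered algorithms differ) shows the kind of multiplication-counting trick AlphaTensor searches for:

```python
import numpy as np

def strassen_2x2(A, B):
    """Strassen's 1969 scheme: the 2x2 matrix product via 7
    multiplications (m1..m7) instead of the naive 8."""
    a, b, c, d = A[0, 0], A[0, 1], A[1, 0], A[1, 1]
    e, f, g, h = B[0, 0], B[0, 1], B[1, 0], B[1, 1]
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                     [m2 + m4, m1 - m2 + m3 + m6]])

A, B = np.random.rand(2, 2), np.random.rand(2, 2)
assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the naive product
```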

The rise of quantum computing and its implications for current encryption standards are well known. But why exactly should quantum computers be especially adept at breaking encryption? The answer is a nifty bit of mathematical juggling called Shor’s algorithm. The question that remains is: what does this algorithm do that makes quantum computers so much better at cracking encryption? In this video, YouTuber minutephysics explains it in his traditional whiteboard cartoon style.

“Quantum computation has the potential to make it super, super easy to access encrypted data — like having a lightsaber you can use to cut through any lock or barrier, no matter how strong,” minutephysics says. “Shor’s algorithm is that lightsaber.”

According to the video, Shor’s algorithm works off the fact that for almost any guess g that shares no factors with the number N you want to factor, some power of g is one more than a multiple of N: that is, g^p = 1 (mod N) for some period p. Find that period, and if p is even, the two numbers g^(p/2) - 1 and g^(p/2) + 1 are each likely to share a factor with N. That would unlock the encryption (specifically RSA here, but it works on some other types) because it recovers both prime factors.
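To make that arithmetic concrete, here is a purely classical Python sketch of the recipe (our illustration; the quantum speedup lies entirely in finding the period p faster than the brute-force loop below):

```python
from math import gcd
from random import randrange

def shor_classical(N: int) -> int:
    """Classical sketch of the number theory behind Shor's algorithm.
    Assumes N is an odd composite, e.g. an RSA modulus. A quantum
    computer finds the period p exponentially faster; we brute-force
    it here purely for illustration."""
    while True:
        g = randrange(2, N)
        if (d := gcd(g, N)) > 1:
            return d                     # lucky guess already shares a factor
        p, x = 1, g % N                  # find smallest p with g**p = 1 (mod N)
        while x != 1:
            x = (x * g) % N
            p += 1
        if p % 2 == 0:
            r = pow(g, p // 2, N)        # g**(p/2) mod N
            for f in (gcd(r - 1, N), gcd(r + 1, N)):
                if 1 < f < N:
                    return f             # nontrivial factor of N

print(shor_classical(15))  # 3 or 5
```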

Juncal Arbelaiz Mugica is a native of Spain, where octopus is a common menu item. However, Arbelaiz appreciates octopus and similar creatures in a different way, with her research into soft-robotics theory.

More than half of an octopus’ nerves are distributed through its eight arms, each of which has some degree of autonomy. This distributed sensing and information processing system intrigued Arbelaiz, who is researching how to design decentralized intelligence for human-made systems with embedded sensing and computation. At MIT, Arbelaiz is an applied math student who is working on the fundamentals of optimal distributed control and estimation in the final weeks before completing her PhD this fall.

She finds inspiration in the biological intelligence of invertebrates such as octopuses and jellyfish, with the ultimate goal of designing novel control strategies for flexible “soft” robots that could be used in tight or delicate surroundings, for example as surgical tools or in search-and-rescue missions.

Training the large neural networks behind many modern AI tools requires real computational might. For example, OpenAI’s most advanced language model, GPT-3, required an astounding million billion billion operations to train, and cost about US $5 million in compute time. Engineers think they have figured out a way to ease that burden by using a different way of representing numbers.

Back in 2017, John Gustafson, then jointly appointed at A*STAR Computational Resources Centre and the National University of Singapore, and Isaac Yonemoto, then at Interplanetary Robot and Electric Brain Co., developed a new way of representing numbers. These numbers, called posits, were proposed as an improvement over the standard floating-point arithmetic processors used today.

Now, a team of researchers at the Complutense University of Madrid has developed the first processor core implementing the posit standard in hardware and shown that, bit for bit, the accuracy of a basic computational task increased by up to four orders of magnitude compared with computing using standard floating-point numbers. They presented their results at last week’s IEEE Symposium on Computer Arithmetic.
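To give a flavour of the format, here is a minimal Python decoder for a posit bit pattern (our sketch, assuming the sign | regime | exponent | fraction layout with es = 2 from the 2022 posit standard; it is not the Madrid team’s hardware design):

```python
def decode_posit(word: int, n: int = 8, es: int = 2) -> float:
    """Decode an n-bit posit into a float: a sign bit, a variable-length
    regime (run of identical bits plus a terminator), up to es exponent
    bits, then the fraction. Zero and NaR are the two special patterns."""
    bits = format(word & ((1 << n) - 1), f"0{n}b")
    if bits == "0" * n:
        return 0.0
    if bits == "1" + "0" * (n - 1):
        return float("nan")                        # NaR, "not a real"
    sign = -1.0 if bits[0] == "1" else 1.0
    if sign < 0:                                   # negatives are stored in
        bits = format(-word & ((1 << n) - 1), f"0{n}b")  # two's complement
    body = bits[1:]
    run = len(body) - len(body.lstrip(body[0]))    # regime run length
    k = run - 1 if body[0] == "1" else -run        # regime value
    tail = body[run + 1:]                          # skip the terminator bit
    exp = int(tail[:es].ljust(es, "0"), 2)         # truncated exp bits read as 0
    frac_bits = tail[es:]
    frac = int(frac_bits, 2) / 2 ** len(frac_bits) if frac_bits else 0.0
    return sign * 2.0 ** (k * 2 ** es + exp) * (1.0 + frac)

for w in (0b01000000, 0b01100000, 0b11000000):
    print(f"{w:08b} -> {decode_posit(w)}")         # 1.0, 16.0, -1.0
```

The variable-length regime is what gives posits their tapered accuracy: values near 1 get more fraction bits than the extremes, which is where most neural-network weights and activations live.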

Neural networks are learning algorithms that approximate the solution to a task by training with available data. However, it is usually unclear how exactly they accomplish this. Two young Basel physicists have now derived mathematical expressions that allow one to calculate the optimal solution without training a network. Their results not only give insight into how those learning algorithms work, but could also help to detect unknown phase transitions in physical systems in the future.

Neural networks are modeled on the way the brain works. Such computer algorithms learn to solve problems through repeated training and can, for example, distinguish objects or process spoken language.
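As a toy illustration of that kind of repeated training (our sketch, unrelated to the Basel group’s derivation), a small two-layer network can learn the XOR function by gradient descent:

```python
import numpy as np

# Train a tiny two-layer network on XOR with plain gradient descent.
rng = np.random.default_rng(0)
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

W1, b1 = rng.normal(size=(2, 8)), np.zeros(8)    # hidden layer, 8 units
W2, b2 = rng.normal(size=(8, 1)), np.zeros(1)    # output layer
sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))

for step in range(5000):
    h = np.tanh(X @ W1 + b1)                     # forward pass
    p = sigmoid(h @ W2 + b2)
    dp = p - y                                   # cross-entropy gradient
    dW2, db2 = h.T @ dp, dp.sum(0)               # backpropagation
    dh = (dp @ W2.T) * (1.0 - h ** 2)
    dW1, db1 = X.T @ dh, dh.sum(0)
    for P, G in ((W1, dW1), (b1, db1), (W2, dW2), (b2, db2)):
        P -= 0.1 * G                             # gradient-descent step

print(np.round(p.squeeze(), 2))                  # ~[0, 1, 1, 0] after training
```

The Basel result, by contrast, gives mathematical expressions for that optimal solution directly, with no training loop at all.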

For several years now, physicists have been trying to use neural networks to detect phase transitions as well. Phase transitions are familiar to us from everyday experience, for instance when water freezes to ice, but they also occur in more complex form between different phases of magnetic materials or quantum systems, where they are often difficult to detect.