
An in-depth survey of the various technologies for spaceship propulsion, from those we can expect to see in a few years to those at the edge of theoretical science. We'll break them down to basics and familiarize ourselves with the concepts.
Note: I made a rather large math error about the force per unit of power the EmDrive exerts at 32:10; initial tentative results for its thrust are a good deal higher, compared to a flashlight, than I calculated.

Visit the sub-reddit:
https://www.reddit.com/r/IsaacArthur/

Join the Facebook Group:
https://www.facebook.com/groups/1583992725237264/

Shanghai-born Zhang Yitang is a professor of mathematics at the University of California, Santa Barbara. If a 111-page manuscript allegedly written by him passes peer review, he might become the first person to solve the Riemann hypothesis, reports The South China Morning Post (SCMP). His work will be published soon.

The Riemann hypothesis is a puzzle more than 150 years old that is considered by the community to be the holy grail of mathematics. Published in 1859, it is a fascinating piece of mathematical conjecture about prime numbers and how they can be predicted.

Riemann hypothesized that prime numbers do not occur erratically but rather follow the frequency of an elaborate function, which is called the Riemann zeta function. Using this function, one can reliably predict where prime numbers occur, but more than a century later, no mathematician has been able to prove this hypothesis.
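
For reference, the objects in play can be stated compactly in standard notation; the formulas below are textbook material, not drawn from Zhang's manuscript:

```latex
% The Riemann zeta function, defined for Re(s) > 1 by the series below and
% extended to the rest of the complex plane by analytic continuation. The
% Euler product on the right is the link to the prime numbers:
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
         \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}

% The Riemann hypothesis asserts that every nontrivial zero of \zeta
% lies on the critical line Re(s) = 1/2:
\zeta(s) = 0 \ \text{ and } \ 0 < \operatorname{Re}(s) < 1
\;\Longrightarrow\; \operatorname{Re}(s) = \tfrac{1}{2}
```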



Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighborhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
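
A minimal sketch of that shared-key pattern, using a toy one-time pad rather than any production cipher (the messages and variable names are purely illustrative):

```python
import os

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the matching key byte (a one-time pad)."""
    assert len(key) >= len(data), "a one-time pad key must be at least as long as the message"
    return bytes(d ^ k for d, k in zip(data, key))

# Alice and Bob agree on this key in person, before they are separated.
shared_key = os.urandom(32)

ciphertext = xor_bytes(b"meet at noon", shared_key)  # Alice encrypts
plaintext = xor_bytes(ciphertext, shared_key)        # Bob decrypts with the same key
assert plaintext == b"meet at noon"                  # Eve, lacking the key, sees only noise
```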

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
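
A toy illustration of that one-way asymmetry (the primes here are tiny by cryptographic standards; real moduli run to hundreds of digits):

```python
# Multiplying two primes is one fast operation for Alice...
p, q = 1_000_003, 1_000_033
n = p * q

def factor(n: int) -> tuple[int, int]:
    """Eve's brute-force trial division: fine for toy numbers, hopeless
    at real key sizes, which is the asymmetry public-key systems exploit."""
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    raise ValueError("no nontrivial factors")

assert factor(n) == (p, q)  # feasible here only because n is tiny
```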

Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) — the British equivalent (more or less) of the US National Security Agency (NSA) — who wanted to protect communications between a large number of people in a security organization. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.

Los Alamos National Laboratory researchers have developed a novel method for comparing neural networks that looks into the “black box” of artificial intelligence to help researchers comprehend neural network behavior. Neural networks identify patterns in datasets and are utilized in applications as diverse as virtual assistants, facial recognition systems, and self-driving vehicles.

“The artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”
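
The paper's own metric isn't described in this excerpt; a common baseline for comparing two networks' internal representations is centered kernel alignment (CKA), so the sketch below shows that standard technique rather than the Los Alamos method:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two activation matrices,
    one row per input example. Returns a similarity score in [0, 1]."""
    X = X - X.mean(axis=0)  # center each feature
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

# The same 100 inputs pushed through two hypothetical networks' hidden layers:
acts_a = np.random.randn(100, 64)
acts_b = acts_a @ np.random.randn(64, 32)  # a linear remix of the same representation
print(linear_cka(acts_a, acts_b))          # scores high despite the different widths
```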

For the better part of a century, quantum physics and the general theory of relativity have been a marriage on the rocks. Each perfect in their own way, the two just can’t stand each other when in the same room.

Now a mathematical proof on the quantum nature of black holes just might show us how the two can reconcile, at least enough to produce a grand new theory on how the Universe works on cosmic and microcosmic scales.

A team of physicists has mathematically demonstrated a weird quirk concerning how these mind-bendingly dense objects might exist in a state of quantum superposition, simultaneously occupying a spectrum of possible characteristics.

HOW did our universe begin? This is among the most profound questions of all, and you would be forgiven for thinking it is impossible to answer. But Laura Mersini-Houghton says she has cracked it. A cosmologist at the University of North Carolina at Chapel Hill, she was born and raised under communist dictatorship in Albania, where her father was considered ideologically opposed to the regime and exiled. She later won a Fulbright scholarship to study in the US, forging a career in cosmology in which she has tackled the origins of the universe – and made an extraordinary proposal.

Mersini-Houghton’s big idea is that the universe in its earliest moments can be understood as a quantum wave function – a mathematical description of a haze of possibilities – that gave rise to many diverse universes as well as our own. She has also made predictions about how other universes would leave an imprint upon our own. Those ideas have been controversial, with some physicists arguing that her predictions are invalid. But Mersini-Houghton argues that they have been confirmed by observations of the radiation left over from the big bang, known as the cosmic microwave background.

Alexandra Bernadotte, Ph.D., a mathematician and doctor and Associate Professor in the Department of Information Technologies and Computer Sciences at MISIS University, has developed algorithms that significantly increase the accuracy with which robotic devices recognize mental commands. The result is achieved by optimizing the selection of a dictionary. Implemented in robotic devices, the algorithms can also be used to transmit information through noisy communication channels. The results have been published in the peer-reviewed international scientific journal Mathematics.

Developers of systems aimed at improving the quality of human life face the task of improving classification accuracy for objects (audio, video, or electromagnetic signals) when compiling the so-called "dictionaries" of their devices.

The simplest example is a voice assistant. Audio or video devices for the remote control of an object within line of sight use a limited set of commands. At the same time, it is important that the command classifier accurately understands and does not confuse the commands included in the device's dictionary. It also means that recognition accuracy should not fall below a certain value in the presence of extraneous noise.
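
The excerpt doesn't detail Bernadotte's algorithms, but the flavor of dictionary optimization can be illustrated with a simple greedy heuristic: keep commands far apart in edit distance, so that noise is less likely to turn one into another (the command words below are made up):

```python
def edit_distance(a: str, b: str) -> int:
    """Levenshtein distance via the classic single-row dynamic program."""
    dp = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        prev, dp[0] = dp[0], i
        for j, cb in enumerate(b, 1):
            prev, dp[j] = dp[j], min(dp[j] + 1, dp[j - 1] + 1, prev + (ca != cb))
    return dp[-1]

def pick_dictionary(candidates: list[str], size: int) -> list[str]:
    """Greedily grow a command set, always adding the candidate farthest
    (in minimum edit distance) from everything already chosen."""
    chosen = [candidates[0]]
    while len(chosen) < size:
        best = max(
            (w for w in candidates if w not in chosen),
            key=lambda w: min(edit_distance(w, c) for c in chosen),
        )
        chosen.append(best)
    return chosen

commands = ["go", "stop", "left", "right", "lift", "list", "rise", "halt"]
print(pick_dictionary(commands, 4))  # never keeps both of a confusable pair like "lift"/"list"
```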

An artificial intelligence system from Google’s sibling company DeepMind stumbled on a new way to solve a foundational math problem at the heart of modern computing, a new study finds. A modification of the company’s game engine AlphaZero (famously used to defeat chess grandmasters and legends in the game of Go) outperformed an algorithm that had not been improved on for more than 50 years, researchers say.

The new research focused on multiplying grids of numbers known as matrices. Matrix multiplication is an operation key to many computational tasks, such as processing images, recognizing speech commands, training neural networks, running simulations to predict the weather, and compressing data for sharing on the Internet.
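
The 50-year-old benchmark referenced here is Strassen's 1969 algorithm, which multiplies two 2×2 matrices with seven scalar multiplications instead of the naive eight; applied recursively to matrix blocks, that single saved multiplication compounds into an asymptotically faster method. A minimal sketch:

```python
def strassen_2x2(A, B):
    """Multiply 2x2 matrices with 7 multiplications (Strassen, 1969).
    Recursing on matrix blocks turns the saving into ~n^2.807 work
    overall, versus n^3 for the schoolbook method."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4, m1 - m2 + m3 + m6]]

assert strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]) == [[19, 22], [43, 50]]
```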

In February 2019, JQI Fellow Alicia Kollár, who is also an assistant professor of physics at UMD, bumped into Adrian Chapman, then a postdoctoral fellow at the University of Sydney, at a quantum information conference. Although the two came from very different scientific backgrounds, they quickly discovered that their research had a surprising commonality. They both shared an interest in graph theory, a field of math that deals with points and the connections between them.

Chapman found graphs through his work in quantum error correction—a field that deals with protecting fragile quantum information from errors in an effort to build ever-larger quantum computers. He was looking for new ways to approach a long-standing search for the Holy Grail of quantum error correction: a way of encoding quantum information that is resistant to errors by construction and doesn’t require active correction. Kollár had been pursuing new work in graph theory to describe her photon-on-a-chip experiments, but some of her results turned out to be the missing piece in Chapman’s puzzle.

Their ensuing collaboration resulted in a new tool that aids in the search for new quantum error correction schemes—including the Holy Grail of self-correcting quantum error correction. They published their findings recently in the journal PRX Quantum.