Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighborhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
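To make the shared-key idea concrete, here is a minimal sketch in Python of a one-time pad, the textbook version of this pattern. It is illustrative only (not the cipher your bank actually uses), and it assumes Alice and Bob have already exchanged the random key at their secret meeting.

```python
# Toy one-time pad: illustrative only, not a production cipher.
# Assumes Alice and Bob already share `key` from their secret meeting.
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR each message byte with the matching key byte."""
    return bytes(d ^ k for d, k in zip(data, key))

message = b"meet me at noon"
key = secrets.token_bytes(len(message))   # the pre-shared secret, as long as the message

ciphertext = xor_bytes(message, key)      # what Eve might intercept
recovered = xor_bytes(ciphertext, key)    # only a key holder can undo the XOR

assert recovered == message
```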

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
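That imbalance can be seen even in a few lines of Python. The sketch below, using only the standard library, multiplies two primes in a single step and then recovers them by naive trial division; real attackers use far better factoring algorithms, and real keys use numbers hundreds of digits long, but the easy-forward, hard-backward shape is the same.

```python
# Easy direction vs. hard direction, in miniature.
import time

p, q = 999_983, 1_000_003        # two modest primes; real keys are vastly larger
n = p * q                        # "easy": one multiplication, effectively instant

def smallest_factor(n: int) -> int:
    """Recover a factor of n by naive trial division (the 'hard' direction)."""
    f = 2
    while f * f <= n:
        if n % f == 0:
            return f
        f += 1
    return n

start = time.perf_counter()
f = smallest_factor(n)
print(f, n // f, f"recovered in {time.perf_counter() - start:.2f}s")
```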

Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) — the British equivalent (more or less) of the US National Security Agency (NSA) — who wanted to protect communications between a large number of people in a security organization. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.

Los Alamos National Laboratory researchers have developed a novel method for comparing neural networks that looks into the “black box” of artificial intelligence to help researchers comprehend neural network behavior. Neural networks identify patterns in datasets and are utilized in applications as diverse as virtual assistants, facial recognition systems, and self-driving vehicles.

“The artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”
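The article does not describe the Los Alamos method itself, but one common way to make "comparing neural networks" concrete is to compare the activations two networks produce on the same inputs using a similarity measure such as linear centered kernel alignment (CKA). The sketch below illustrates that general idea only; it is not the technique reported in the study.

```python
# Comparing two networks by comparing their activations with linear CKA.
# Illustrative of the general idea of network comparison, not the Los Alamos method.
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two activation matrices
    (rows = the same inputs, columns = each network's features)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    denominator = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return numerator / denominator

rng = np.random.default_rng(0)
acts_a = rng.normal(size=(100, 64))               # activations of a (hypothetical) network
Q, _ = np.linalg.qr(rng.normal(size=(64, 64)))    # a random rotation
acts_b = acts_a @ Q                               # the same representation, rotated

print(linear_cka(acts_a, acts_b))                        # ~1.0: essentially the same representation
print(linear_cka(acts_a, rng.normal(size=(100, 64))))    # noticeably lower for unrelated activations
```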

For the better part of a century, quantum physics and the general theory of relativity have been a marriage on the rocks. Each perfect in their own way, the two just can’t stand each other when in the same room.

Now a mathematical proof on the quantum nature of black holes just might show us how the two can reconcile, at least enough to produce a grand new theory on how the Universe works on cosmic and microcosmic scales.

A team of physicists has mathematically demonstrated a weird quirk concerning how these mind-bendingly dense objects might exist in a state of quantum superposition, simultaneously occupying a spectrum of possible characteristics.

HOW did our universe begin? This is among the most profound questions of all, and you would be forgiven for thinking it is impossible to answer. But Laura Mersini-Houghton says she has cracked it. A cosmologist at the University of North Carolina at Chapel Hill, she was born and raised under a communist dictatorship in Albania, where her father was considered ideologically opposed to the regime and exiled. She later won a Fulbright scholarship to study in the US, forging a career in cosmology in which she has tackled the origins of the universe – and made an extraordinary proposal.

Mersini-Houghton’s big idea is that the universe in its earliest moments can be understood as a quantum wave function – a mathematical description of a haze of possibilities – that gave rise to many diverse universes as well as our own. She has also made predictions about how other universes would leave an imprint upon our own. Those ideas have been controversial, with some physicists arguing that her predictions are invalid. But Mersini-Houghton argues that they have been confirmed by observations of the radiation left over from the big bang, known as the cosmic microwave background.

Alexandra Bernadotte, Ph.D., a mathematician and doctor and Associate Professor in the Department of Information Technologies and Computer Sciences at MISIS University, has developed algorithms that significantly increase the accuracy with which robotic devices recognize mental commands. The result is achieved by optimizing the selection of a dictionary. Algorithms implemented in robotic devices can also be used to transmit information over noisy communication channels. The results have been published in the peer-reviewed international scientific journal Mathematics.

Developers of systems intended to improve the quality of human life face the task of improving classification accuracy for objects (audio, video or electromagnetic signals) when compiling the so-called "dictionaries" used by their devices.

The simplest example is a voice assistant. Audio or video transmission devices for remote control of an object within the line-of-sight zone use a limited set of commands. At the same time, it is important that the command classifier accurately recognizes, and does not confuse, the commands included in the device's dictionary, and that recognition accuracy does not fall below a certain threshold in the presence of extraneous noise.
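The published algorithms are not reproduced here, but the flavor of dictionary optimization can be sketched: from a pool of candidate commands, keep a subset whose members are as mutually dissimilar as possible, so that a classifier working over a noisy channel has less chance of confusing them. The greedy selection below, with string edit similarity standing in for acoustic or signal similarity, is purely an illustration under those assumptions.

```python
# Hedged sketch of dictionary selection: greedily keep commands that are far
# apart from one another. Illustrative only; not the algorithms from the paper.
from difflib import SequenceMatcher

def distance(a: str, b: str) -> float:
    """A simple dissimilarity score between two commands (0 = identical)."""
    return 1.0 - SequenceMatcher(None, a, b).ratio()

def pick_dictionary(candidates: list[str], size: int) -> list[str]:
    """Greedy farthest-point selection of `size` mutually distinct commands."""
    chosen = [candidates[0]]
    while len(chosen) < size:
        best = max(
            (c for c in candidates if c not in chosen),
            key=lambda c: min(distance(c, s) for s in chosen),
        )
        chosen.append(best)
    return chosen

pool = ["start", "stop", "stand", "go", "slow", "load", "lower", "hover"]
print(pick_dictionary(pool, 4))   # a subset chosen to be hard to confuse
```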

An artificial intelligence system from Google’s sibling company DeepMind stumbled on a new way to solve a foundational math problem at the heart of modern computing, a new study finds. A modification of the company’s game engine AlphaZero (famously used to defeat chess grandmasters and legends in the game of Go) outperformed an algorithm that had not been improved on for more than 50 years, researchers say.

The new research focused on multiplying grids of numbers known as matrices. Matrix multiplication is an operation key to many computational tasks, such as processing images, recognizing speech commands, training neural networks, running simulations to predict the weather, and compressing data for sharing on the Internet.
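The 50-year-old benchmark the article refers to is generally taken to be Volker Strassen's 1969 algorithm, which multiplies 2×2 blocks using 7 scalar multiplications instead of the schoolbook 8; applied recursively to large matrices, that one saved multiplication compounds into a substantial speedup, and it is precisely this multiplication count that DeepMind's search drives down further. A small sketch of the two rules:

```python
# Schoolbook vs. Strassen for 2x2 matrices: same answer, one fewer multiplication.

def schoolbook_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    return [[a*e + b*g, a*f + b*h],      # 8 scalar multiplications
            [c*e + d*g, c*f + d*h]]

def strassen_2x2(A, B):
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)               # 7 scalar multiplications in total
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A, B = [[1, 2], [3, 4]], [[5, 6], [7, 8]]
assert schoolbook_2x2(A, B) == strassen_2x2(A, B)
```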

In February 2019, JQI Fellow Alicia Kollár, who is also an assistant professor of physics at UMD, bumped into Adrian Chapman, then a postdoctoral fellow at the University of Sydney, at a quantum information conference. Although the two came from very different scientific backgrounds, they quickly discovered that their research had a surprising commonality. They both shared an interest in graph theory, a field of math that deals with points and the connections between them.

Chapman found graphs through his work in quantum error correction—a field that deals with protecting fragile quantum information from errors in an effort to build ever-larger quantum computers. He was looking for new ways to approach a long-standing search for the Holy Grail of quantum error correction: a way of encoding quantum information that is resistant to errors by construction and doesn’t require active correction. Kollár had been pursuing new work in graph theory to describe her photon-on-a-chip experiments, but some of her results turned out to be the missing piece in Chapman’s puzzle.

Their ensuing collaboration resulted in a new tool that aids in the search for new quantum error correction schemes—including the Holy Grail of self-correcting quantum error correction. They published their findings recently in the journal PRX Quantum.

HOUSTON, Oct. 18, 2022 – The nonprofit Space Center Houston is advancing a Facilities Master Plan to support the growing need for space exploration learning and training in two massive structures that will also give the public a front-row seat to the development of robotics, rovers, lunar landers and reduced gravity systems. Today, the center offered a glimpse of the facility that will include two enclosed simulated cosmic terrains of the Moon and Mars, as well as modular surface labs and STEM learning centers. An elevated exhibit hall over the two surfaces will offer the public immersive experiences to observe astronaut training first-hand while experiencing the future of space exploration as humans return to the Moon and, eventually, go on to Mars.

Space Center Houston is responding to the opportunities and challenges in a rapidly evolving space sector, including the need for facilities built for current and future missions, while sharing this excitement with the public and addressing critical gaps in the development of the STEM workforce through its education programs. The facility will bring together guests, NASA, commercial space partners, colleges, universities and global space agencies to collaborate on new technologies that are propelling present and future human spaceflight.

For 30 years, Space Center Houston has chronicled the journey of human spaceflight while empowering and inspiring people to pursue careers in science, technology, engineering and mathematics. “Space is expanding once again and a new space age is upon us,” said William T. Harris, President and CEO of Space Center Houston. “With new ambitions, new players and new challenges, we will shift our focus from being a curator of past achievements to also facilitating new feats in space.”