
With mathematical modeling, a research team has now succeeded in better understanding how the optimal working state of the human brain, called criticality, is achieved. Their results mark an important step toward biologically inspired information processing and new, highly efficient computer technologies, and have been published in Scientific Reports.
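
The article does not spell out the team's model, but a standard toy model of criticality in neuroscience is the branching process: each active neuron triggers, on average, sigma successors, and sigma = 1 marks the critical point between activity dying out and running away. A minimal sketch of that idea (illustrative only, not the published model):

```python
import numpy as np

rng = np.random.default_rng(0)

def avalanche_size(sigma: float, max_steps: int = 50) -> int:
    """Total activity in one 'avalanche' of a toy branching process.

    Each active unit activates Poisson(sigma) successors per step:
    sigma < 1 dies out quickly, sigma > 1 tends to explode, and
    sigma = 1 (criticality) yields heavy-tailed avalanche sizes.
    """
    active, total = 1, 1
    for _ in range(max_steps):
        if active == 0:
            break
        active = int(rng.poisson(sigma, size=active).sum())
        total += active
    return total

# Mean avalanche size in the three regimes (note the jump at sigma = 1).
for sigma in (0.8, 1.0, 1.2):
    sizes = [avalanche_size(sigma) for _ in range(2000)]
    print(f"sigma={sigma}: mean size {sum(sizes) / len(sizes):.1f}")
```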

“In particular tasks, supercomputers are better than humans, for example in the field of artificial intelligence. But they can’t manage the variety of tasks in everyday life—driving a car first, then making music and telling a story at a get-together in the evening,” explains Hermann Kohlstedt, professor of nanoelectronics. Moreover, today’s computers and smartphones still consume an enormous amount of energy.

“These are not sustainable technologies—while our brain consumes just 25 watts in everyday life,” Kohlstedt continues. The aim of their interdisciplinary research network, “Neurotronics: Bio-inspired Information Pathways,” is therefore to develop new electronic components for more energy-efficient computer architectures. For this purpose, the alliance of engineering, life and natural sciences investigates how the nervous system works and how it has developed.

An in-depth survey of the various technologies for spaceship propulsion, from those we can expect to see in a few years to those at the edge of theoretical science. We’ll break them down to basics and familiarize ourselves with the concepts.
Note: I made a rather large math error about the force per power the EmDrive exerts at 32:10; initial tentative results for thrust are a good deal higher than the flashlight comparison I calculated.

Shanghai-born Zhang Yitang is a professor of mathematics at the University of California, Santa Barbara. If a 111-page manuscript allegedly written by him passes peer review, he might become the first person to solve the Riemann hypothesis, the South China Morning Post (SCMP) reported. His work will be published soon.

The Riemann hypothesis is a more than 160-year-old puzzle that is considered by the mathematical community to be the holy grail of mathematics. Published in 1859, it is a fascinating piece of mathematical conjecture about prime numbers and how their occurrence can be predicted.

Riemann hypothesized that prime numbers do not occur erratically, but rather that their frequency follows the behavior of an elaborate function, now known as the Riemann zeta function. Using this function, one can reliably predict where prime numbers occur, but more than a century later, no mathematician has been able to prove the hypothesis.
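
For reference, the standard statement (not taken from the article itself): the zeta function is defined by a series, tied to the primes through Euler's product formula, and the hypothesis concerns the non-trivial zeros of its analytic continuation:

```latex
% Riemann zeta function: defined by the series for Re(s) > 1 and
% connected to the primes through Euler's product formula.
\zeta(s) \;=\; \sum_{n=1}^{\infty} \frac{1}{n^{s}}
       \;=\; \prod_{p\ \mathrm{prime}} \frac{1}{1 - p^{-s}}

% Riemann hypothesis: every non-trivial zero \rho of the analytic
% continuation of zeta lies on the critical line Re(s) = 1/2.
\zeta(\rho) = 0 \;\Longrightarrow\; \operatorname{Re}(\rho) = \tfrac{1}{2}
\qquad \text{(non-trivial zeros } \rho\text{)}
```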

Two basic types of encryption schemes are used on the internet today. One, known as symmetric-key cryptography, follows the same pattern that people have been using to send secret messages for thousands of years. If Alice wants to send Bob a secret message, they start by getting together somewhere they can’t be overheard and agree on a secret key; later, when they are separated, they can use this key to send messages that Eve the eavesdropper can’t understand even if she overhears them. This is the sort of encryption used when you set up an online account with your neighborhood bank; you and your bank already know private information about each other, and use that information to set up a secret password to protect your messages.
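
As a concrete sketch of the symmetric pattern, here is the oldest scheme of all, a one-time pad, in a few lines of Python. This is only an illustration of "same secret key on both ends"; real systems use vetted ciphers such as AES rather than anything hand-rolled:

```python
import secrets

def xor_bytes(data: bytes, key: bytes) -> bytes:
    """XOR data with a key of equal length (a one-time pad)."""
    return bytes(d ^ k for d, k in zip(data, key, strict=True))

# Alice and Bob meet in private and agree on a random secret key
# as long as the message they will later send.
message = b"attack at dawn"
key = secrets.token_bytes(len(message))   # the shared secret

ciphertext = xor_bytes(message, key)      # Alice encrypts...
plaintext = xor_bytes(ciphertext, key)    # ...Bob decrypts with the same key.
assert plaintext == message
# Eve, who sees only `ciphertext`, learns nothing without `key`.
```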

The second scheme is called public-key cryptography, and it was invented only in the 1970s. As the name suggests, these are systems where Alice and Bob agree on their key, or part of it, by exchanging only public information. This is incredibly useful in modern electronic commerce: if you want to send your credit card number safely over the internet to Amazon, for instance, you don’t want to have to drive to their headquarters to have a secret meeting first. Public-key systems rely on the fact that some mathematical processes seem to be easy to do, but difficult to undo. For example, for Alice to take two large whole numbers and multiply them is relatively easy; for Eve to take the result and recover the original numbers seems much harder.
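
The multiply-easy, factor-hard asymmetry is simple to see in code. A toy demonstration, using primes far smaller than the hundreds-of-digit primes used in practice:

```python
import math

# Easy direction: multiplying two primes is instantaneous, even for
# numbers hundreds of digits long.
p, q = 1000003, 1000033      # toy "large" primes
n = p * q

# Hard direction: recovering p and q from n alone. Trial division is
# the naive approach; even at this toy size it needs about a million
# steps, and the cost grows explosively with the number of digits.
def factor(n: int) -> tuple[int, int]:
    for d in range(2, math.isqrt(n) + 1):
        if n % d == 0:
            return d, n // d
    raise ValueError("n is prime")

print(factor(n))  # (1000003, 1000033) -- after visibly more work
```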

Public-key cryptography was invented by researchers at the Government Communications Headquarters (GCHQ) — the British equivalent (more or less) of the US National Security Agency (NSA) — who wanted to protect communications between a large number of people in a security organization. Their work was classified, and the British government neither used it nor allowed it to be released to the public. The idea of electronic commerce apparently never occurred to them. A few years later, academic researchers at Stanford and MIT rediscovered public-key systems. This time they were thinking about the benefits that widespread cryptography could bring to everyday people, not least the ability to do business over computers.

Los Alamos National Laboratory researchers have developed a novel method for comparing neural networks that looks into the “black box” of artificial intelligence to help researchers comprehend neural network behavior. Neural networks identify patterns in datasets and are utilized in applications as diverse as virtual assistants, facial recognition systems, and self-driving vehicles.

“The artificial intelligence research community doesn’t necessarily have a complete understanding of what neural networks are doing; they give us good results, but we don’t know how or why,” said Haydn Jones, a researcher in the Advanced Research in Cyber Systems group at Los Alamos. “Our new method does a better job of comparing neural networks, which is a crucial step toward better understanding the mathematics behind AI.”
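
The article does not detail the Los Alamos method, but a common baseline for comparing two networks is to compare their activations on the same inputs, for example with linear centered kernel alignment (CKA, Kornblith et al. 2019). The sketch below is illustrative of that general approach, not the paper's technique:

```python
import numpy as np

def linear_cka(X: np.ndarray, Y: np.ndarray) -> float:
    """Linear centered kernel alignment between two activation
    matrices (rows = the same inputs, columns = neurons). Values
    near 1 mean the networks represent the inputs similarly."""
    X = X - X.mean(axis=0)                      # center each neuron
    Y = Y - Y.mean(axis=0)
    num = np.linalg.norm(Y.T @ X, "fro") ** 2
    den = np.linalg.norm(X.T @ X, "fro") * np.linalg.norm(Y.T @ Y, "fro")
    return float(num / den)

rng = np.random.default_rng(0)
A = rng.normal(size=(1000, 20))                 # activations of net 1
Q, _ = np.linalg.qr(rng.normal(size=(20, 20)))
B = A @ Q                                       # net 2: a rotated copy of net 1
C = rng.normal(size=(1000, 20))                 # net 3: unrelated

print(linear_cka(A, B))   # ~1.0: same representation in a different basis
print(linear_cka(A, C))   # close to 0: genuinely different
```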

For the better part of a century, quantum physics and the general theory of relativity have been a marriage on the rocks. Each perfect in their own way, the two just can’t stand each other when in the same room.

Now a mathematical proof on the quantum nature of black holes just might show us how the two can reconcile, at least enough to produce a grand new theory on how the Universe works on cosmic and microcosmic scales.

A team of physicists has mathematically demonstrated a weird quirk concerning how these mind-bendingly dense objects might exist in a state of quantum superposition, simultaneously occupying a spectrum of possible characteristics.

HOW did our universe begin? This is among the most profound questions of all, and you would be forgiven for thinking it is impossible to answer. But Laura Mersini-Houghton says she has cracked it. A cosmologist at the University of North Carolina at Chapel Hill, she was born and raised under communist dictatorship in Albania, where her father was considered ideologically opposed to the regime and exiled. She later won a Fulbright scholarship to study in the US, forging a career in cosmology in which she has tackled the origins of the universe – and made an extraordinary proposal.

Mersini-Houghton’s big idea is that the universe in its earliest moments can be understood as a quantum wave function – a mathematical description of a haze of possibilities – that gave rise to many diverse universes as well as our own. She has also made predictions about how other universes would leave an imprint upon our own. Those ideas have been controversial, with some physicists arguing that her predictions are invalid. But Mersini-Houghton argues that they have been confirmed by observations of the radiation left over from the big bang, known as the cosmic microwave background.

Alexandra Bernadotte, Ph.D., a mathematician and doctor who is an associate professor in the Department of Information Technologies and Computer Sciences at MISIS University, has developed algorithms that significantly increase the accuracy with which robotic devices recognize mental commands. The result is achieved by optimizing the selection of a dictionary. Implemented in robotic devices, the algorithms can be used to transmit information through noisy communication channels. The results have been published in the peer-reviewed international scientific journal Mathematics.

Developers of systems meant to improve the quality of human life face the task of improving the accuracy with which objects (audio, video, or electromagnetic signals) are classified when compiling the so-called “dictionaries” their devices use.

The simplest example is a voice assistant. Audio or video transmission devices for remote control of an object within the line-of-sight zone use a limited set of commands. At the same time, it is important that the command classifier accurately understands and does not confuse the commands included in the device's dictionary. It also means that the recognition accuracy should not fall below a certain value in the presence of extraneous noise.
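
The article does not give Bernadotte's actual algorithms, but the flavor of dictionary optimization can be shown with a toy greedy selection that keeps commands as mutually dissimilar as possible, using edit distance as a crude stand-in for how easily a noisy recognizer could confuse two commands:

```python
def levenshtein(a: str, b: str) -> int:
    """Edit distance: a rough proxy for command confusability."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                 # deletion
                           cur[j - 1] + 1,              # insertion
                           prev[j - 1] + (ca != cb)))   # substitution
        prev = cur
    return prev[-1]

def pick_dictionary(candidates: list[str], k: int) -> list[str]:
    """Greedily pick k commands that maximize the minimum pairwise
    distance, giving the classifier the most room for noise."""
    chosen = [candidates[0]]
    while len(chosen) < k:
        best = max((w for w in candidates if w not in chosen),
                   key=lambda w: min(levenshtein(w, c) for c in chosen))
        chosen.append(best)
    return chosen

commands = ["start", "stop", "step", "left", "right", "lift", "light"]
print(pick_dictionary(commands, 4))  # well-separated subset of commands
```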

An artificial intelligence system from Google’s sibling company DeepMind has stumbled on a new way to solve a foundational math problem at the heart of modern computing, a new study finds. A modification of the company’s game-playing system AlphaZero (famous for its superhuman play in chess and the game of Go) outperformed an algorithm that had not been improved on for more than 50 years, researchers say.

The new research focused on multiplying grids of numbers known as matrices. Matrix multiplication is an operation key to many computational tasks, such as processing images, recognizing speech commands, training neural networks, running simulations to predict the weather, and compressing data for sharing on the Internet.
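
The half-century-old benchmark here is Strassen's 1969 algorithm, which multiplies 2×2 (block) matrices with seven scalar multiplications instead of the schoolbook eight; applied recursively, that gives an O(n^2.807) method, and DeepMind's system searched for schemes with even fewer multiplications for particular matrix sizes. A direct transcription of Strassen's seven products:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 multiplications (Strassen, 1969)
    instead of the schoolbook 8; the entries may themselves be matrix
    blocks, which is what makes the trick pay off recursively."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    m1 = (a + d) * (e + h)
    m2 = (c + d) * e
    m3 = a * (f - h)
    m4 = d * (g - e)
    m5 = (a + b) * h
    m6 = (c - a) * (e + f)
    m7 = (b - d) * (g + h)
    return [[m1 + m4 - m5 + m7, m3 + m5],
            [m2 + m4,           m1 - m2 + m3 + m6]]

A = [[1, 2], [3, 4]]
B = [[5, 6], [7, 8]]
print(strassen_2x2(A, B))  # [[19, 22], [43, 50]], matching A @ B
```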