
‘Sensational breakthrough’ marks step toward revealing hidden structure of prime numbers

Steps toward infinity: getting much closer to a solution of the Riemann hypothesis :D


Just as molecules are composed of atoms, in math, every natural number can be broken down into its prime factors—those that are divisible only by themselves and 1. Mathematicians want to understand how primes are distributed along the number line, in the hope of revealing an organizing principle for the atoms of arithmetic.
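The "atoms of arithmetic" picture is easy to make concrete in code. Below is a minimal trial-division sketch (illustrative only; it is not how number theorists factor large numbers in practice) that breaks a natural number into its prime factors:

```python
def prime_factors(n):
    """Decompose a natural number n > 1 into its prime factors by trial division."""
    factors = []
    d = 2
    while d * d <= n:
        # divide out each factor d as many times as it appears
        while n % d == 0:
            factors.append(d)
            n //= d
        d += 1
    if n > 1:
        factors.append(n)  # whatever remains is itself prime
    return factors
```

For example, prime_factors(84) returns [2, 2, 3, 7], reflecting 84 = 2 × 2 × 3 × 7.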

“At first sight, they look pretty random,” says James Maynard, a mathematician at the University of Oxford. “But actually, there’s believed to be this hidden structure within the prime numbers.”

For 165 years, mathematicians seeking that structure have focused on the Riemann hypothesis. Proving it would offer a Rosetta Stone for decoding the primes—as well as a $1 million award from the Clay Mathematics Institute. Now, in a preprint posted online on 31 May, Maynard and Larry Guth of the Massachusetts Institute of Technology have taken a step in this direction by ruling out certain exceptions to the Riemann hypothesis. The result is unlikely to win the cash prize, but it represents the first progress in decades on a major knot in math’s biggest unsolved problem, and it promises to spark new advances throughout number theory.

Google DeepMind’s new AI systems can now solve complex math problems

AI models can easily generate essays and other types of text. However, they’re nowhere near as good at solving math problems, which tend to involve logical reasoning—something that’s beyond the capabilities of most current AI systems.

But that may finally be changing. Google DeepMind says it has trained two specialized AI systems to solve complex math problems involving advanced reasoning. The systems—called AlphaProof and AlphaGeometry 2—worked together to successfully solve four out of six problems from this year’s International Mathematical Olympiad (IMO), a prestigious competition for high school students. They won the equivalent of a silver medal.

Balancing instability and robustness: New mathematical framework for dynamics of natural systems

Scientists all over the world use modeling approaches to understand complex natural systems such as climate systems or neuronal or biochemical networks. A team of researchers has now developed a new mathematical framework that explains, for the first time, a mechanism behind long transient behaviors in complex systems.

The Mysteries of Physics, Dualities, M theory, and the Emergent Nature of Space-time

In this thought-provoking exploration, we delve into the profound reflections of Edward Witten, a leading figure in theoretical physics. Join us as we navigate the complexities of dualities, the enigmatic nature of M-theory, and the intriguing concept of emergent space-time. Witten, the only physicist to win the prestigious Fields Medal, offers deep insights into the mathematical and physical mysteries that shape our understanding of reality. From the holographic principle to the elusive (2,0) theory, we uncover how these advanced theories interconnect and challenge our conventional perceptions. This journey is not just a deep dive into high-level physics but a philosophical quest to grasp the nature of existence itself. Read the full interview here: https://www.quantamagazine.org/edward

#EdwardWitten #TheoreticalPhysics #StringTheory #QuantumFieldTheory #MTheory.


Did abstract mathematics exist before the big bang?

Did abstract mathematics, such as Pythagoras’s theorem, exist before the big bang?

Simon McLeish Lechlade, Gloucestershire, UK

The notion of the existence of mathematical ideas is a complex one.

One way to look at it is that mathematics is about the use of logical thought to derive information, often information about other mathematical ideas. The use of objective logic should mean that mathematical ideas are eternal: they have always been, and always will be.

Network properties determine neural network performance

Machine learning influences numerous aspects of modern society, empowers new technologies, from AlphaGo to ChatGPT, and increasingly materializes in consumer products such as smartphones and self-driving cars. Despite the vital role and broad applications of artificial neural networks, we lack systematic approaches, such as network science, to understand their underlying mechanisms. The difficulty is rooted in the many possible model configurations, each with different hyper-parameters and weighted architectures determined by noisy data. We bridge the gap by developing a mathematical framework that maps the neural network’s performance to the network characteristics of the line graph governed by the edge dynamics of stochastic gradient descent differential equations. This framework enables us to derive a neural capacitance metric to universally capture a model’s generalization capability on a downstream task and predict model performance using only early training results. The numerical results on 17 pre-trained ImageNet models across five benchmark datasets and one NAS benchmark indicate that our neural capacitance metric is a powerful indicator for model selection based only on early training results and is more efficient than state-of-the-art methods.
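The model-selection idea, ranking candidates from early training results alone, can be caricatured with a toy heuristic. The sketch below is not the authors' neural capacitance metric; it simply scores hypothetical early validation-accuracy curves by their current level plus their recent trend, and picks the highest-scoring model:

```python
def early_training_score(val_acc):
    """Score a model from its first few epochs of validation accuracy:
    current level plus half the improvement so far. This is a crude
    stand-in for a principled metric such as neural capacitance."""
    level = val_acc[-1]
    trend = val_acc[-1] - val_acc[0]
    return level + 0.5 * trend

# Hypothetical early validation-accuracy curves for three candidate models
curves = {
    "model_a": [0.42, 0.55, 0.61],
    "model_b": [0.50, 0.52, 0.53],
    "model_c": [0.30, 0.48, 0.58],
}

# Select the model with the best early-training score
best = max(curves, key=lambda name: early_training_score(curves[name]))
```

Here "model_c" wins despite a lower current accuracy than "model_a", because its steep early trend dominates; a real metric would aim to predict final downstream performance far more reliably than this heuristic.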

Using AI to train AI: Model collapse could be coming for LLMs, say researchers

Using AI-generated datasets to train future generations of machine learning models may pollute their output, a concept known as model collapse, according to a new paper published in Nature. The research shows that within a few generations, original content is replaced by unrelated nonsense, demonstrating the importance of using reliable data to train AI models.

Generative AI tools such as large language models (LLMs) have grown in popularity and have been primarily trained using human-generated inputs. However, as these AI models continue to proliferate across the Internet, computer-generated content may be used to train other AI models—or themselves—in a recursive loop.

Ilia Shumailov and colleagues present mathematical models to illustrate how AI models may experience model collapse. The authors demonstrate that an AI may overlook certain outputs (for example, less common lines of text) in training data, causing it to train itself on only a portion of the dataset.
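The tail-loss mechanism described above can be illustrated with a toy simulation (not the paper's actual models). Each "generation" fits a Gaussian to a small sample drawn from the previous generation's fit; over many generations the fitted spread steadily collapses, so rare, tail events from the original distribution stop being produced:

```python
import random
import statistics

random.seed(0)  # make the toy run reproducible

mu, sigma = 0.0, 1.0  # generation 0: "human" data, a standard normal
for generation in range(200):
    # each generation sees only a small sample of the previous model's output
    samples = [random.gauss(mu, sigma) for _ in range(10)]
    # ...and refits its model to that sample alone
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)

# after many generations the fitted spread has shrunk far below 1.0:
# the less common outputs of the original distribution have vanished
```

The small sample size exaggerates the effect, but the direction of drift is the point: each refit loses a little of the tails, and the losses compound.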

The Clinical, Philosophical, Evolutionary and Mathematical Machinery of Consciousness: An Analytic Dissection of the Field Theories and a Consilience of Ideas

The Cartesian model of mind-body dualism concurs with religious traditions. However, science has supplanted this idea with an energy-matter theory of consciousness, where matter is equivalent to the body and energy replaces the mind or soul. This equivalency is analogous to the concept of the interchange of mass and energy as expressed by Einstein’s famous equation E = mc². Immanuel Kant, in his Critique of Pure Reason, provided the intellectual and theoretical framework for a theory of mind or consciousness. Any theory of consciousness must include the fact that a conscious entity, as far as is known, is a wet biological medium (the brain), of stupendously high entropy. This organ or entity generates a field that must account for the “binding problem”, which we will define. This proposed field, the conscious electro-magnetic information (CEMI) field, also has physical properties, which we will outline. We will also demonstrate the seamless transition of the Kantian philosophy of the a priori conception of space and time, the organs of perception and conception, into the CEMI field of consciousness. We will explore the concept of the CEMI field and its neurophysiological correlates, and in particular, synchronous and coherent gamma oscillations of various neuronal ensembles, as in William J Freeman’s experiments in the early 1970s with olfactory perception in rabbits. The expansion of the temporo-parietal-occipital (TPO) cortex in hominid evolution epitomizes metaphorical and abstract thinking. This area of the cortex, with synchronous thalamo-cortical oscillations, has the best fit for a minimal neural correlate of consciousness. Our field theory shifts consciousness from an abstract idea to a tangible energy with defined properties and a mathematical framework. Even further, it is not a coincidence that the cerebral cortex is very thin with respect to the diameter of the brain.
This is in keeping with its fantastically high entropy, as we see in the event horizon of a black hole and the conformal field theory/anti-de Sitter (CFT/AdS) holographic model of the universe. We adumbrate the uniqueness of consciousness of an advanced biological system such as the human brain and draw insight from Avicenna’s “floating man” thought experiment. The multi-system, high-volume afferentation of a biological wet system honed after millions of years of evolution, its high entropy, and the CEMI field variation inducing currents in motor output pathways are proposed to spark the seeds of consciousness. We will also review Karl Friston’s free energy principle, the concept of belief-update in a Bayesian inference framework, the minimization of the divergence of prior and posterior probability distributions, and the entropy of the brain. We will streamline these highly technical papers, which view consciousness as a minimization principle akin to Hilbert’s action in deriving Einstein’s field equation or Feynman’s sum of histories in quantum mechanics. Consciousness here is interpreted as a flow of probability densities on a Riemannian manifold, where the gradient of ascent on this manifold across contour lines determines the magnitude of perception or the degree of update of the belief-system in a Bayesian inference model. Finally, the science of consciousness has transcended metaphysics, and its study is now rooted in the latest advances of neurophysiology and neuroradiology, under the aegis of mathematics.
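The Bayesian belief-update framing in the abstract can be made concrete with a toy example (the coin, the hypothesis grid, and the observation counts below are illustrative and not from the paper): a prior over a coin's bias is revised by Bayes' rule, and the KL divergence between posterior and prior quantifies the size of the belief update:

```python
import math

# discrete hypotheses for a coin's bias
biases = [i / 10 for i in range(1, 10)]    # 0.1, 0.2, ..., 0.9
prior = [1 / len(biases)] * len(biases)    # uniform prior belief

def update(prior, heads, tails):
    """Bayes' rule on the hypothesis grid: posterior ∝ likelihood × prior."""
    post = [p * (b ** heads) * ((1 - b) ** tails) for p, b in zip(prior, biases)]
    z = sum(post)  # normalizing constant
    return [p / z for p in post]

# observe 7 heads and 3 tails, then update the belief distribution
posterior = update(prior, heads=7, tails=3)

# KL divergence of posterior from prior: the magnitude of the belief update
kl = sum(q * math.log(q / p) for q, p in zip(posterior, prior) if q > 0)
```

After the update, belief concentrates on a bias of 0.7, and the KL divergence is strictly positive, reflecting the information gained from the observations.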

Keywords: anatomy & physiology; brain anatomy; disorders of consciousness; philosophy.

Copyright © 2020, Kesserwani et al.