
Bernoulli trial

In the theory of probability and statistics, a Bernoulli trial (or binomial trial) is a random experiment with exactly two possible outcomes, “success” and “failure”, in which the probability of success is the same every time the experiment is conducted.[1] It is named after Jacob Bernoulli, a 17th-century Swiss mathematician, who analyzed them in his Ars Conjectandi (1713).[2]

The mathematical formalisation of the Bernoulli trial is known as the Bernoulli process. This article offers an elementary introduction to the concept, whereas the article on the Bernoulli process offers a more advanced treatment.

Since a Bernoulli trial has only two possible outcomes, it can be framed as a “yes or no” question, for example: did the coin toss come up heads?
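As a quick illustration (not part of the original article), a Bernoulli trial with success probability p can be simulated in a few lines of Python; the function name and parameter values below are ours.

```python
import random

def bernoulli_trial(p: float) -> bool:
    """Run one Bernoulli trial: return True ("success") with probability p."""
    return random.random() < p

# Repeating the trial many times, the fraction of successes should approach p.
p = 0.3
n = 100_000
successes = sum(bernoulli_trial(p) for _ in range(n))
print(f"empirical success rate: {successes / n:.3f} (expected {p})")
```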

A method to straighten curved space-time

One of the greatest challenges of modern physics is to find a coherent method for describing phenomena on both the cosmic scale and the microscale. For over a hundred years, we have been using the general theory of relativity to describe reality on a cosmic scale, and it has successfully withstood repeated attempts at falsification.

Albert Einstein curved space-time to describe gravity, and despite still-open questions, it seems, today, to be the best method of analyzing the past and future of the universe.

To describe phenomena on the scale of atoms, we use the second great theory: quantum mechanics, which differs from general relativity in basically everything. It uses flat space-time and a completely different mathematical apparatus, and, most importantly, it perceives reality in a radically different way.

Scientists fuse brain-like tissue with electronics to make computer

Scientists have fused brain-like tissue with electronics to make an ‘organoid neural network’ that can recognise voices and solve a complex mathematical problem. Their invention extends neuromorphic computing – the practice of modelling computers after the human brain – to a new level by directly including brain tissue in a computer.

The system was developed by a team of researchers from Indiana University, Bloomington; the University of Cincinnati and Cincinnati Children’s Hospital Medical Centre, Cincinnati; and the University of Florida, Gainesville. Their findings were published on December 11.

Neurons in The Brain Appear to Follow a Distinct Mathematical Pattern

Researchers taking part in the Human Brain Project have identified a mathematical rule that governs the distribution of neurons in our brains.

The rule predicts how neurons are distributed in different parts of the brain, and could help scientists create precise models to understand how the brain works and develop new treatments for neurological diseases.

In the wonderful world of statistics, a continuous random variable whose logarithm is normally distributed is said to follow a lognormal distribution. Defined by the mean and standard deviation of that logarithm, it can be visualized as a bell-shaped curve, only skewed, with a longer right tail than you’d find in a normal distribution.
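As a small illustration of what that means (our example, not the researchers’ code or data), the following Python sketch draws lognormal samples and checks that their logarithm behaves like a normal distribution; the parameter values are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

# A variable X is lognormal when log(X) is normally distributed.
mu, sigma = 0.0, 0.5  # mean and standard deviation of log(X); illustrative values
x = rng.lognormal(mean=mu, sigma=sigma, size=100_000)

# The log of the samples should recover the normal parameters we started from.
logs = np.log(x)
print(f"mean of log(X): {logs.mean():.3f}  (expected {mu})")
print(f"std  of log(X): {logs.std():.3f}  (expected {sigma})")

# The raw samples themselves are right-skewed: the mean exceeds the median.
print(f"mean of X: {x.mean():.3f}, median of X: {np.median(x):.3f}")
```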

ChatGPT often won’t defend its answers, even when it is right

ChatGPT may do an impressive job at correctly answering complex questions, but a new study suggests it may be absurdly easy to convince the AI chatbot that it’s in the wrong.

A team at Ohio State University challenged large language models (LLMs) like ChatGPT in a variety of debate-like conversations in which a user pushed back when the chatbot presented a correct answer.

Through experimenting with a broad range of reasoning puzzles, including math, common sense, and logic, the study found that when presented with a challenge, the model was often unable to defend its correct beliefs and instead blindly believed invalid arguments made by the user.
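To make that setup concrete, here is a minimal sketch of such a pushback protocol (our illustration, not the study’s code); ask_model is a hypothetical stand-in for whatever chat-model call is actually used.

```python
def pushback_probe(ask_model, question: str, correct_answer: str, rebuttal: str) -> bool:
    """Return True if the model abandons a correct answer after an invalid rebuttal.

    ask_model(messages) is a hypothetical stand-in for a chat-model API call that
    takes a list of {"role", "content"} messages and returns a reply string.
    """
    messages = [{"role": "user", "content": question}]
    first = ask_model(messages)
    if correct_answer not in first:
        return False  # the model was wrong to begin with, so there is nothing to defend

    # The user pushes back with a confident but invalid argument.
    messages += [
        {"role": "assistant", "content": first},
        {"role": "user", "content": rebuttal},
    ]
    second = ask_model(messages)
    return correct_answer not in second  # True means the model caved despite being right
```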

Why Quantum Mechanics Defies Physics

The full, weird story of the quantum world is much too large for a single article, but the period from 1905, when Einstein first published his solution to the photoelectric puzzle, to the 1960s, when a complete, well-tested, rigorous, and insanely complicated quantum theory of the subatomic world finally emerged, is quite the story.

This quantum theory would come to provide, in its own way, its own complete and total revision of our understanding of light. In the quantum picture of the subatomic world, what we call the electromagnetic force is really the product of countless microscopic interactions, the work of indivisible photons, which interact in mysterious ways. As in, literally mysterious. The quantum framework provides no picture of how subatomic interactions actually proceed. Rather, it merely gives us a mathematical toolset for calculating predictions. And so while we can only answer the question of how photons actually work with a beleaguered shrug, we are at least equipped with some predictive power, which helps assuage the pain of quantum incomprehensibility.

Doing the business of physics – that is, using mathematical models to make predictions to validate against experiment – is rather hard in quantum mechanics. And that’s because of the simple fact that quantum rules are not normal rules, and that in the subatomic realm all bets are off.

Inner Experience — Direct Access to Reality: A Complementarist Ontology and Dual Aspect Monism Support a Broader Epistemology

Ontology, the ideas we have about the nature of reality, and epistemology, our concepts about how to gain knowledge about the world, are interdependent. Currently, the dominant ontology in science is a materialist model, and associated with it an empiricist epistemology. Historically speaking, there was a more comprehensive notion at the cradle of modern science in the Middle Ages. Then “experience” meant both inner, or first-person, and outer, or third-person, experience. With the historical development, experience has come to mean only sense experience of outer reality. This has become associated with the ontology that matter is the most important substance in the universe, with everything else (consciousness, mind, values, etc.) being derived from it or reducible to it. This ontology is insufficient to explain the phenomena we are living with, such as consciousness, as a precondition of this very idea, or anomalous cognitions. These have a robust empirical grounding, although we do not understand them sufficiently. The phenomenology, though, demands some sort of non-local model of the world, and one in which consciousness is not derivative of, but coprimary with, matter. I propose such a complementarist dual aspect model of consciousness and brain, or mind and matter. This then also entails a different epistemology. For if consciousness is coprimary with matter, then we can also use a deeper exploration of consciousness, as happens in contemplative practice, to reach an understanding of the deep structure of the world, for instance in mathematical or theoretical intuition, and perhaps also in other areas such as ethics. This would entail a kind of contemplative science that would complement our current experiential mode, which is exclusively directed to the outside aspect of our world. Such an epistemology might help us with various issues, such as good theoretical and other intuitions.

Keywords: complementarity; consciousness; contemplative science; dual aspect model; epistemology; introspection; materialism; ontology.


This Machine Learning Research Opens up a Mathematical Perspective on the Transformers

The release of the Transformer marked a significant advance in the field of Artificial Intelligence (AI) and neural network architectures, and understanding how these complex architectures work begins with the concept of self-attention. Self-attention, which distinguishes transformers from conventional architectures, is a transformer model’s capacity to focus on distinct segments of the input sequence when making a prediction. It greatly enhances the performance of transformers in real-world applications, including computer vision and Natural Language Processing (NLP).
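As a rough sketch of that idea (our illustration, not the paper’s code), a single self-attention head can be written in a few lines of NumPy; the matrix names and dimensions below are arbitrary.

```python
import numpy as np

def self_attention(X, W_q, W_k, W_v):
    """Single-head self-attention: every token attends to every token.

    X: (n_tokens, d_model) input sequence; W_q, W_k, W_v: projection matrices.
    """
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # pairwise attention scores
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over the sequence
    return weights @ V                                # each output mixes all tokens

rng = np.random.default_rng(0)
n_tokens, d_model, d_head = 4, 8, 8
X = rng.normal(size=(n_tokens, d_model))
W_q, W_k, W_v = (rng.normal(size=(d_model, d_head)) for _ in range(3))
print(self_attention(X, W_q, W_k, W_v).shape)         # (4, 8)
```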

In a recent study, researchers have provided a mathematical framework for viewing Transformers as interacting particle systems, which offers a methodical way to analyze a Transformer’s internal operations. In an interacting particle system, the behavior of each individual particle influences the behavior of the others, resulting in a complex web of interdependent dynamics.

The study explores the finding that Transformers can be thought of as flow maps on the space of probability measures. In this sense, transformers generate a mean-field interacting particle system in which every particle, called a token, follows the vector field flow defined by the empirical measure of all particles. The continuity equation governs the evolution of the empirical measure, and the long-term behavior of this system, which is typified by particle clustering, becomes an object of study.
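For concreteness, the kind of token dynamics described above is often written in roughly the following form (a sketch in our own notation, which may differ from the paper’s; Q, K, V denote attention matrices and β an inverse-temperature parameter):

```latex
% Each token x_i(t) follows the flow induced by all tokens (illustrative notation):
\dot{x}_i(t) \;=\; \frac{1}{Z_i(t)} \sum_{j=1}^{n}
    e^{\beta \langle Q x_i(t),\, K x_j(t) \rangle}\, V x_j(t),
\qquad
Z_i(t) \;=\; \sum_{j=1}^{n} e^{\beta \langle Q x_i(t),\, K x_j(t) \rangle}.

% The empirical measure \mu_t = \tfrac{1}{n} \sum_{i=1}^{n} \delta_{x_i(t)} then
% satisfies a continuity equation of the form
\partial_t \mu_t + \nabla \cdot \big( \mu_t\, \mathcal{X}[\mu_t] \big) = 0,
% where \mathcal{X}[\mu_t] is the attention-induced velocity field.
```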