
In 1827, botanist Robert Brown studied the motion of pollen particles suspended in water. These little grains seemed to jitter around randomly. Brown performed a variety of tests on them and realized that all small particles, not just pollen, exhibited the same motion when suspended in water. Something other than the presence of life was causing these little particles to move around. Mathematicians took note and eventually developed a theory describing this process, naming it Brownian Motion in his honor.

This theory has expanded well beyond its original context and become a beautiful subfield of mathematics called Stochastic Processes. Nowhere was this influence illustrated better than in 1905, when Albert Einstein used the theory of Brownian Motion to provide compelling evidence for the existence of atoms. The makeup of our universe’s tiniest particles was highly debated at the time, and Einstein’s work helped solidify atomic theory.

Wow, that’s quite the leap! In order to understand how we got from pollen grains to confirming atomic theory, we’re going to have to learn some background about Brownian Motion. In this article, I’ll spend some time talking about the basics. This includes some cool videos that demonstrate the patterns of Brownian Motion and the statistics going on behind the scenes. We’ll then dive into Einstein’s version which came as one of his extremely influential series of papers in 1905. There’s a lot of ground to cover, so let’s get started!
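Before diving in, it helps to see the process numerically. Below is a minimal Python sketch (my own illustration, not part of the original article or its videos) that simulates two-dimensional Brownian motion as a scaled random walk: over each small time step the particle's position changes by an independent Gaussian increment.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulate 2D Brownian motion as the limit of a random walk: over a small
# time step dt, each coordinate moves by an independent Gaussian increment
# with mean 0 and variance dt.
rng = np.random.default_rng(seed=0)
n_steps, dt = 10_000, 0.001

increments = rng.normal(loc=0.0, scale=np.sqrt(dt), size=(n_steps, 2))
path = np.cumsum(increments, axis=0)  # running sum = the particle's trajectory

# A hallmark statistic: averaged over many particles, the mean squared
# displacement grows linearly with time (about 2t in two dimensions).
print("squared displacement at the end of this run:", np.sum(path[-1] ** 2))

plt.plot(path[:, 0], path[:, 1], linewidth=0.5)
plt.title("One simulated Brownian path")
plt.axis("equal")
plt.show()
```

Run it a few times with different seeds and you'll see exactly the kind of erratic, self-similar wandering Brown observed under his microscope.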

The possibility of direct interfacing between biological and technological information devices could result in a merger of mind and machine — Ultimate Computing. This book, a thorough consideration of this idea, involves a number of disciplines, including biochemistry, cognitive science, computer science, engineering, mathematics, microbiology, molecular biology, pharmacology, philosophy, physics, physiology, and psychology.

One of the fundamental and timeless questions of life concerns the mechanics of its inception. Take human development, for example: how do individual cells come together to form complex structures like skin, muscles, bones, or even a brain, a finger, or a spine?

Although the answers to such questions remain unknown, one line of scientific inquiry lies in understanding gastrulation — the stage at which embryo cells develop from a single layer into a multilayered structure with a main body axis. In humans, gastrulation happens around 14 days after conception.

It’s not possible to study human embryos at this stage, so researchers at the University of California San Diego, the University of Dundee (UK), and Harvard University studied gastrulation in chick embryos, which closely resemble human embryos at this point in development.

When Isaac Newton inscribed onto parchment his now-famed laws of motion in 1687, he could only have hoped we’d be discussing them three centuries later.

Writing in Latin, Newton outlined three universal principles describing how the motion of objects is governed in our Universe, which have been translated, transcribed, discussed and debated at length.

But according to a philosopher of language and mathematics, we might have been interpreting Newton’s precise wording of his first law of motion slightly wrong all along.

(Bloomberg) — Google DeepMind, Alphabet Inc.’s research division, said it has taken a “crucial step” towards making artificial intelligence as capable as humans. It involves solving high-school math problems.

Consciousness is one of the most mysterious and fascinating aspects of human existence. It is also one of the most challenging to study scientifically, as it involves subjective experiences that are not directly observable or measurable. David Chalmers, a professor of philosophy and neural science at NYU, writes in his book The Conscious Mind:

“It may be the largest outstanding obstacle in our quest for a scientific understanding of the universe.”

The real questions are: how can we approach the problem of consciousness from a rigorous and objective perspective? Is there a way to quantify and model the phenomena of awareness, feelings, thoughts, and selfhood? There is no definitive answer to these questions, but some researchers have attempted to use mathematical tools and methods to study these phenomena. Self-awareness, for instance, is the ability to perceive and understand the things that make you who you are as an individual, such as your personality, actions, values, beliefs, and even thoughts. Some studies have used the mirror test to assess the development of self-awareness in infants and animals.

This is an AI called a Neural Network. But all of the transistors and electronics are replaced with DNA, the molecule of life… all in one test tube.
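To ground that claim, the computation being implemented chemically is, at its heart, the standard neural-network forward pass: a matrix multiplication followed by a nonlinearity. Here is a tiny numpy sketch of that abstract operation (the numbers are made up for illustration; the DNA strand-displacement details are in the papers listed below):

```python
import numpy as np

# One layer of a neural network, in the abstract: weighted sums (a matrix
# multiplication) followed by a nonlinearity. Roughly speaking, in the DNA
# version the weights are set by strand concentrations and the
# multiply-accumulate happens through strand-displacement reactions.
x = np.array([1.0, 0.0, 1.0])            # input pattern (e.g. a tiny binary image)
W = np.array([[0.8, -0.3, 0.5],          # illustrative weight matrix
              [-0.2, 0.9, 0.1]])
b = np.array([-0.5, -0.4])               # illustrative biases

z = W @ x + b                            # matrix multiplication plus bias
y = np.where(z > 0, 1, 0)                # hard threshold: which outputs "fire"
print(y)                                 # -> the predicted class pattern
```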

Papers used for this video.
DNA Neural Networks: https://www.nature.com/articles/s4225
Computation Via DNA: https://www.nature.com/articles/s4159
DNA logic circuits: https://www.nature.com/articles/s4146
Matrices Using DNA: https://onlinelibrary.wiley.com/doi/1

Music:
City Life – Artificial.Music (No Copyright Music)
Link:
Pure Water by Meydän.
Link: • Meydän — Pure Water [Creative Commons…
Forever Sunrise — by Jonny Easton.
Link: • Forever Sunrise — Soft Inspirational…

Software used:
Manim CE
Keynote
Blender
Molecular Nodes by @BradyJohnston

0:00 Intro
1:03 Neural Networks Recap
3:40 DNA Computing Introduction
7:20 DNA Transistor
9:55 DNA Matrix Multiplication
11:11 Closing Thoughts

On top of this, the use of quantification has significantly increased over the last decades with the inflation of metrics, indicators, and scores to rank and benchmark options (Muller, 2018). The case of energy policy making in the European Union is again an effective example. The European Union’s recent energy strategy has been underpinned by the Clean Energy for all Europeans packages, which are in turn supported by a number of individual directives, each one characterized by a series of quantitative goals (European Commission, 2023). The quantification of the impact (impact assessment) is customarily required to successfully promote new political measures (European Commission, 2015a) and is in turn often based on mathematical models (Saltelli et al., 2023). The emphasis on producing exact figures to assess the contribution of a new technology or a political or economic measure has put many models and their users into decision-making contexts that at times extend beyond their original intent (Saltelli, Bammer et al., 2020). At the same time, efforts to retrospectively assess the performance of energy models have been extremely limited, one example being the Energy Modeling Forum in the United States (Huntington et al., 1982). Yet retrospective assessments can be very helpful in understanding the sources of mismatch between a forecast and the actual figures reported a posteriori (Koomey et al., 2003). For example, long-range forecast models are typically based on the assumption of gradual structural change, which is at odds with the disruptive events and discontinuities occurring in the real world (Craig et al., 2002). This dimension is especially important in terms of the nature and pace of technological change (Bistline et al., 2023; Weyant & Olavson, 1999). A further critical element in this approach is the cognitive bias in scenario analysis, which naturally leads to overconfidence in the option being explored and results in an underestimation of the range of possible outcomes (Morgan & Keith, 2008).

Additionally, in their quest for capturing the features of the energy systems represented, models have increased their complicatedness and/or complexity. In this context, the need to appraise model uncertainty has become of paramount importance, especially considering the uncertainty due to error propagation caused by model complexification (Puy et al., 2022). In ecology, this is known as the O’Neill conjecture, which posits a principle of decreasing returns for model complexity when uncertainties come to dominate the output (O’Neill, 1989; Turner & Gardner, 2015). Capturing and apportioning uncertainty is crucial for a healthy interaction at the science–policy interface, including energy policy making, because it promotes better informed decision-making. Yet Yue et al. (2018) found that only about 5% of the studies covering energy system optimization models have included some form of assessment of stochastic uncertainty, which is the part of uncertainty that can be fully quantified (Walker et al., 2003). When it comes to adequately apportioning this uncertainty onto the input parameters and hypotheses through sensitivity analysis, the situation is even more critical: only very few papers in the energy field have made use of state-of-the-art approaches (Lo Piano & Benini, 2022; Saltelli et al., 2019). Further to that, the epistemic part of uncertainty, which arises from imperfect knowledge and problem framing, has been largely ignored in the energy modeling literature (Pye et al., 2018). For instance, important sources of uncertainty associated with regulatory lag and public acceptance have typically been overlooked.
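To make "apportioning uncertainty onto the input parameters" concrete, here is a minimal sketch of a variance-based (Sobol) sensitivity analysis using the SALib Python library. The toy demand model, parameter names, and ranges below are invented purely for illustration; they are not taken from any of the studies cited above.

```python
import numpy as np
from SALib.sample import saltelli
from SALib.analyze import sobol

# Hypothetical toy model: projected demand after 20 years as a function of
# three uncertain inputs (all values are illustrative only).
def toy_demand(params):
    baseline, growth, efficiency = params
    return baseline * (1.0 + growth) ** 20 * efficiency

problem = {
    "num_vars": 3,
    "names": ["baseline", "growth", "efficiency"],
    "bounds": [[90, 110],       # baseline demand
               [0.00, 0.03],    # annual growth rate
               [0.7, 1.0]],     # efficiency multiplier
}

# Quasi-random sampling of the input space, model runs, and Sobol
# decomposition of the output variance.
X = saltelli.sample(problem, 1024)
Y = np.array([toy_demand(x) for x in X])
Si = sobol.analyze(problem, Y)

# First-order (S1) and total-order (ST) indices: the share of output
# variance attributable to each input, alone and including interactions.
for name, s1, st in zip(problem["names"], Si["S1"], Si["ST"]):
    print(f"{name:12s} S1={s1:.2f}  ST={st:.2f}")
```

The first-order index S1 reports how much of the output variance each input explains on its own, while the total-order index ST also counts interaction effects; a large gap between the two flags inputs whose importance only emerges in combination with others.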

In this contribution, we discuss three approaches to deal with the challenges of non-neutrality and uncertainty in models: the numerical unit spread assessment pedigree (NUSAP) method, diagnostic diagrams, and sensitivity auditing (SAUD). These challenges are especially critical when only one (set of) model(s) has been selected to contribute to decision-making. One practical case is used to showcase in retrospect the relevance of the issue and the associated problems: the International Institute for Applied Systems Analysis (IIASA) global modeling of the 1980s.