
Why Is Anything Conscious?

We tackle the hard problem of consciousness, taking the naturally selected, self-organising, embodied organism as our starting point. We provide a mathematical formalism describing how biological systems self-organise to hierarchically interpret unlabelled sensory information according to valence and specific needs. Such interpretations imply behavioural policies that can only be differentiated from each other by the qualitative aspect of information processing. Selection pressures favour systems that can intervene in the world to achieve homeostatic and reproductive goals. Quality is a property that arises in such systems to link cause to affect and so motivate real-world interventions. This produces a range of qualitative classifiers (interoceptive and exteroceptive) that motivate specific actions and determine priorities and preferences.

The Complex Structure of Quantum Mechanics

I have been thinking for a while about the mathematics used to formulate our physical theories, especially the similarities and differences among different mathematical formulations. This was a focus of my 2021 book, Physics, Structure, and Reality, where I discussed these things in the context of classical and spacetime physics.

Recently this has led me toward thinking about mathematical formulations of quantum mechanics, where an interesting question arises concerning the use of complex numbers. (I recently secured a grant from the National Science Foundation for a project investigating this.)

It is frequently said by physicists that complex numbers are essential to formulating quantum mechanics, and that this is different from the situation in classical physics, where complex numbers appear as a useful but ultimately dispensable calculational tool. It is not often said why, or in what way, complex numbers are supposed to be essential to quantum mechanics as opposed to classical physics.
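
One standard illustration of why this is said (a gloss added here, not a claim from the post): the Schrödinger equation carries the imaginary unit explicitly,

i\hbar \frac{\partial \psi}{\partial t} = \hat{H}\,\psi ,

and the wavefunction \psi is genuinely complex-valued, with relative phases that show up in interference. In classical physics, by contrast, one typically writes an oscillation as x(t) = \mathrm{Re}\left[ A e^{i\omega t} \right], and the complex exponential is only a calculational shorthand; all of the physics sits in the real part.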

Denis Noble — Why The Last 80 Years of Biology was Wrong

We’re joined by Dr. Denis Noble, Professor Emeritus of Cardiovascular Physiology at the University of Oxford, and the father of ‘systems biology’. He is known for his groundbreaking creation, in the 1960s, of the first mathematical model of the heart’s electrical activity, which radically transformed our understanding of the heart.
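
Schematically (a generic Hodgkin-Huxley-style form, not Noble’s exact 1962 equations), models of this kind track the cell’s membrane voltage V as a balance of ionic currents,

C_m \frac{dV}{dt} = -\left( I_{\mathrm{Na}} + I_{\mathrm{K}} + I_{\mathrm{leak}} \right),

where each current depends on voltage-gated conductances. Noble’s early work adapted and extended this framework from nerve to cardiac cells.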

Dr. Noble’s contributions have revolutionized our understanding of cardiac function and the broader field of biology. His work continues to challenge long-standing biological concepts, including gene-centric views like Neo-Darwinism.

In this episode, Dr. Noble discusses his critiques of fundamental biological theories that have shaped science for over 80 years, such as the gene self-replication model and the Weismann barrier. He advocates for a more holistic, systems-based approach to biology, in which genes, cells, and their environments interact in complex networks rather than through a one-way deterministic process.

We dive deep into Dr. Noble’s argument that biology needs to move beyond reductionist views, emphasizing that life is more than just the sum of its genetic code. He explains how AI struggles to replicate even simple biological systems, and how biology’s complexity suggests that life’s logic lies not in DNA alone but in the entire organism.

The conversation covers his thoughts on the flaws of Neo-Darwinism, the influence of environmental factors on evolution, and the future of biology as a field that recognizes the interaction between nature and nurture. We also explore the implications of his work for health and longevity, and how common perspectives on genetics might need rethinking.


Looking back at President Jimmy Carter’s science policy

As President, Jimmy Carter established several science-related initiatives and policies.


Carter also sought to promote scientific research and development in a number of areas. He increased funding for basic research in fields such as physics and chemistry, and created the cabinet-level Department of Education in 1979, raising the federal profile of science and math education in American schools.

On top of that, Carter sought to address environmental issues through science policy. He established the Superfund program to clean up hazardous waste sites, and signed the Alaska National Interest Lands Conservation Act, which protected millions of acres of land in Alaska.

Carter’s science policy emphasized the importance of science and technology in addressing pressing issues such as energy, the environment, and education.

Mitochondrial DNA Evolution: New Study Reveals How Selfish mtDNA Evolve and Thrive

Vanderbilt University researchers, led by alumnus Bryan Gitschlag, have uncovered groundbreaking insights into the evolution of mitochondrial DNA (mtDNA). In their paper in Nature Communications titled “Multiple distinct evolutionary mechanisms govern the dynamics of selfish mitochondrial genomes in Caenorhabditis elegans,” the team reveals how selfish mtDNA, which can reduce the fitness of its host, manages to persist within cells through aggressive competition or by avoiding traditional selection pressures. The study combines mathematical models and experiments to explain the coexistence of selfish and cooperative mtDNA within the cell, offering a fresh view of the complex evolutionary dynamics of these essential cellular components.

Gitschlag conducted the research while in the lab of Maulik Patel, assistant professor of biological sciences, and is now a postdoctoral researcher in David McCandlish’s lab at Cold Spring Harbor Laboratory. He collaborated closely with fellow Patel Lab members, including James Held, a recent PhD graduate, and Claudia Pereira, a former staff member of the lab.

Mathematicians Surprised By Hidden Fibonacci Numbers

What I believe is that symmetry runs through everything, even mathematics, and that the Fibonacci relation seems to express that grand design, much as physics keeps searching for a final parameter that would quantify the infinite.


Recent explorations of unique geometric worlds reveal perplexing patterns, including the Fibonacci sequence and the golden ratio.
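
As background (added here, not part of the article), the golden ratio is the limit of the ratios of consecutive Fibonacci numbers, which a few lines of Python make easy to check:

# Ratios of consecutive Fibonacci numbers converge to the golden ratio.
def fib_ratios(n):
    a, b = 1, 1
    ratios = []
    for _ in range(n):
        a, b = b, a + b          # advance to the next Fibonacci pair
        ratios.append(b / a)     # record the ratio of successive terms
    return ratios

print(fib_ratios(20)[-1])        # ~1.6180340, already close to phi
print((1 + 5 ** 0.5) / 2)        # golden ratio phi ~1.6180340, for comparison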

Language agents help large language models ‘think’ better and cheaper

The large language models that have increasingly taken over the tech world are anything but “cheap.” The most prominent LLMs, such as GPT-4, cost on the order of $100 million to build, counting the legal costs of accessing training data, the computing power needed for what could be billions or trillions of parameters, the energy and water that fuel that computation, and the many coders developing training algorithms that must run cycle after cycle so the machine will “learn.”

But if a researcher needs to do a specialized task that a machine could handle more efficiently, and they aren’t at a large institution that provides access to generative AI tools, what options are available? Say a parent wants to prep their child for a difficult test and needs to show many examples of how to solve complicated math problems.

Building their own LLM is an onerous prospect, given the costs mentioned above, and using the big models like GPT-4 and Llama 3.1 directly might not be immediately suited to the complex logic and math their task requires.
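
For a sense of what a “language agent” wrapper can look like, here is a rough sketch; it is not the researchers’ actual system, the prompt wording is hypothetical, and ask() stands in for any prompt-in, text-out model call:

from typing import Callable

def solve_with_agent(problem: str, ask: Callable[[str], str]) -> str:
    # 1. Plan: have the model break the problem into steps.
    plan = ask("List, one per line, the steps needed to solve:\n" + problem)
    # 2. Solve: carry out each step, feeding earlier work back in as context.
    work = ""
    for step in plan.splitlines():
        if step.strip():
            work += ask("Problem: " + problem + "\nWork so far:\n" + work
                        + "\nCarry out this step: " + step) + "\n"
    # 3. Check: ask for a verification pass and a final answer.
    return ask("Problem: " + problem + "\nWork:\n" + work
               + "\nCheck the work above and state the final answer.")

Because the expensive part is each model call, a loop like this can lean on a smaller, cheaper model for the individual steps while still working through multi-step logic and math.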

Research cracks the Autism Code, making the Neurodivergent Brain Visible

A model grounded in biology reveals the tissue structures linked to the disorder: a researcher’s mathematical modeling approach to brain-imaging analysis connects genes, brain structure, and autism.

A multi-university research team co-led by University of Virginia engineering professor Gustavo K. Rohde has developed a system that can spot genetic markers of autism in brain images with 89 to 95% accuracy.

Their findings suggest doctors may one day see, classify and treat autism and related neurological conditions with this method, without having to rely on, or wait for, behavioral cues. And that means this truly personalized medicine could result in earlier interventions.

OpenAI releases new o1 AI, its first model capable of reasoning

To expand its GPT capabilities, OpenAI released its long-anticipated o1 model, along with a smaller, cheaper o1-mini version. Previously known as Strawberry, the new models, the company says, can “reason through complex tasks and solve harder problems than previous models in science, coding, and math.”

Although it’s still a preview, OpenAI states this is the first model of the series available in ChatGPT and through its API, with more to come.

The company says these models have been trained to “spend more time thinking through problems before they respond, much like a person would. Through training, they learn to refine their thinking process, try different strategies, and recognize their mistakes.”