
Top AI Scientist Unifies Wolfram, Leibniz, & Consciousness | William Hahn

In today’s episode, William Hahn explores how Wolfram’s universal computation and Leibniz’s layered consciousness might converge in modern AI, potentially yielding a new evolutionary step in machine self-awareness.

As a listener of TOE, you can get a special 20% discount to The Economist and all it has to offer! Visit https://www.economist.com/toe.

Rubin Gruber Sandbox (referenced by Will): https://www.fau.edu/sandbox.

➡️Join My New Substack (Personal Writings): https://curtjaimungal.substack.com.
➡️Listen on Spotify: https://tinyurl.com/SpotifyTOE
➡️Become a YouTube Member (Early Access Videos): @theoriesofeverything.

Links Mentioned:
William Hahn’s first appearance on TOE.

CERN Physicists Use AI To Understand the ‘God Particle’

To identify signs of particles like the Higgs boson, CERN researchers work with mountains of data generated by LHC collisions.

Hunting for evidence of an object whose behavior is predicted by existing theories is one thing. But having successfully observed the elusive boson, identifying new and unexpected particles and interactions is an entirely different matter.

To speed up their analysis, physicists feed data from the billions of collisions that occur in LHC experiments into machine learning algorithms. These models are then trained to identify anomalous patterns.
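
For a sense of how that kind of anomaly search can look in code, here is a minimal sketch using an off-the-shelf isolation forest on synthetic, made-up event features. It illustrates unsupervised anomaly flagging in general, not CERN’s actual analysis pipeline, and the feature names are invented for the example.

```python
# Hedged sketch (not CERN's pipeline): unsupervised anomaly detection on
# synthetic "collision event" features, showing the general idea of training
# a model to flag unusual patterns in high-volume event data.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)

# Hypothetical per-event features (e.g., total transverse momentum, jet count,
# missing energy) -- invented for illustration, not real detector variables.
background = rng.normal(loc=[50.0, 4.0, 10.0], scale=[10.0, 1.5, 5.0], size=(10_000, 3))
oddballs = rng.normal(loc=[120.0, 9.0, 60.0], scale=[10.0, 1.5, 5.0], size=(20, 3))

model = IsolationForest(contamination=0.005, random_state=0).fit(background)

# predict() returns -1 for events the model considers anomalous, +1 otherwise.
flags = model.predict(np.vstack([background[:5], oddballs[:5]]))
print(flags)  # typical background events first, then the injected outliers
```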

Targeted polymersomes boost methotrexate efficacy for aggressive choriocarcinoma treatment

Researchers develop targeted polymersomes to enhance methotrexate delivery, offering a promising new approach for treating aggressive choriocarcinoma.

Study: ENT-1-Targeted Polymersomes to Enhance the Efficacy of Methotrexate in Choriocarcinoma Treatment.

In a recent study published in Small Science, researchers develop targeted polymersomes loaded with methotrexate for the treatment of gestational choriocarcinoma, a rare and aggressive malignancy originating from the placenta.

Elon Musk-Led Group Makes $97.4 Billion Bid for Control of OpenAI

In today’s AI news, a consortium of investors led by Elon Musk is offering $97.4 billion to buy the nonprofit that controls OpenAI. The unsolicited offer complicates Sam Altman’s carefully laid plans for OpenAI’s future, including converting it to a for-profit company and spending up to $500 billion on AI infrastructure through a joint venture called Stargate.

In other advancements, all eyes were on French President Emmanuel Macron Sunday at the end of the first day of the AI Action Summit in Paris after he announced a €109 billion investment package. “For me, this summit is not just the announcement of a lot of investment in France. It’s a wake-up call for a European strategy,” he said.

And, Current AI, a “public interest” initiative focused on fostering and steering development of artificial intelligence in societally beneficial directions, was announced at the French AI Action summit on Monday. It’s kicking off with an initial $400 million in pledges from backers and a plan to pull in $2.5 billion more over the next five years.

Then, ZDNET contributor Jack Wallen reports that his local AI of choice is the open-source Ollama. He recently wrote a piece on how to make this local LLM runner easier to use with the help of a browser extension, which he uses on Linux. On macOS devices, Jack turns to an easy-to-use, free app called Msty.
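
For readers who prefer scripting to a browser extension or a desktop app, Ollama also serves a local HTTP API. The snippet below is a minimal sketch that assumes Ollama is running on its default port and that a model such as llama3 has already been pulled with `ollama pull llama3`; the prompt is only an example.

```python
# Minimal sketch of querying a locally running Ollama server from Python.
import requests

resp = requests.post(
    "http://localhost:11434/api/generate",
    json={
        "model": "llama3",        # any locally pulled model tag
        "prompt": "Summarize what a local LLM runner does in one sentence.",
        "stream": False,          # return a single JSON object, not a stream
    },
    timeout=120,
)
resp.raise_for_status()
print(resp.json()["response"])
```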

In videos, at the AI Action Summit in Paris, Yann LeCun’s presentation, “The Next AI Revolution,” underscored a fundamental shift in artificial intelligence, one that moves beyond the brute-force approach of large language models. The future of AI, he argued, hinges on *world models*: structured, adaptive representations that can infer, reason, and plan.

And, DeepSeek is not a threat to OpenAI, says Sridhar Ramaswamy, CEO of Snowflake, the $60BN public company with $3.5BN in revenue growing 30% per year. Sridhar joined Snowflake after it acquired his company, Neeva, for $150M. Mr. Ramaswamy spent 15 years growing Google’s AdWords from $1.5B to over $100B.

Then, former Google CEO Eric Schmidt and Alliant founder Craig Mundie talk with David Rubenstein about the promise, and potential peril, of a new frontier in artificial intelligence, and about their collaboration with the late Henry Kissinger on his final book, Genesis. Recorded January 26, 2025 at The 92nd Street Y in New York City.

Lab-Grown “Mini-brains” Perform Non-Linear Computation, Eat Neurotransmitters, & Go To Space


Human brain organoids (“mini-brains”) are being grown in labs around the world. They’re being fed neurotransmitters, competing with AI to solve non-linear equations, and going to space to study the effects of microgravity. This video reviews three preprints, preliminary reports of new scientific studies. (My AI voice caught a cold this week.)

Support the channel: https://www.patreon.com/ihmcurious.

Preprints:

- Brain Organoid Computing for Artificial Intelligence (Cai et al.) https://www.biorxiv.org/content/10.1101/2023.02.28.530502v1.full.

- Modulation of neuronal activity in cortical organoids with bioelectronic delivery of ions and neurotransmitters (Park et al.) https://www.biorxiv.org/content/10.1101/2023.06.10.544416v1.full.

Brain-inspired neural networks reveal insights into biological basis of relational learning

Humans and certain animals appear to have an innate capacity to learn relationships between different objects or events in the world. This ability, known as “relational learning,” is widely regarded as critical for cognition and intelligence, as learned relationships are thought to allow humans and animals to navigate new situations.

Researchers at ML Collective in San Francisco and Columbia University have conducted a study aimed at understanding the biological basis of relational learning by using a particular type of brain-inspired artificial neural network. Their work, published in Nature Neuroscience, sheds new light on the processes in the brain that could underpin relational learning in humans and other organisms.

“While I was visiting Columbia University, I met my co-author Kenneth Kay and we talked about his research,” Thomas Miconi, co-author of the paper, told Medical Xpress.
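
As a toy illustration of the transitive flavor of relational learning described above (and not the brain-inspired model from the Nature Neuroscience paper), the sketch below learns a score for each of five ordered items from adjacent premise pairs only, then checks whether it generalizes to a comparison it never saw during training. The items and numbers are invented for the example.

```python
# Toy transitive-inference demo (illustrative only, not the paper's model).
# Five items A > B > C > D > E. Learn a 1-D "rank" score per item from adjacent
# premise pairs only, then test the never-seen non-adjacent pair B vs D.
import numpy as np

items = ["A", "B", "C", "D", "E"]
rank = np.zeros(len(items))                  # learnable score per item
premises = [(0, 1), (1, 2), (2, 3), (3, 4)]  # (winner, loser) adjacent pairs

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

lr = 0.5
for _ in range(2000):
    for w, l in premises:
        p = sigmoid(rank[w] - rank[l])   # predicted P(winner beats loser)
        grad = 1.0 - p                   # gradient of the log-likelihood
        rank[w] += lr * grad
        rank[l] -= lr * grad

# Transitive generalization: B (index 1) should beat D (index 3),
# even though the pair (B, D) never appeared in training.
print("P(B > D) =", round(sigmoid(rank[1] - rank[3]), 3))
```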

Gate-controllable two-dimensional transition metal dichalcogenides for spintronic memory

The rapid advancement of technologies like artificial intelligence (AI) and the Internet of Things (IoT) has heightened the demand for high-speed, energy-efficient memory devices. Traditional memory technologies often struggle to balance performance with power consumption.

Spintronic devices, which leverage electron spin rather than charge, present a promising alternative. In particular, two-dimensional transition metal dichalcogenide (TMD) materials are attractive due to their unique electronic properties and potential for miniaturization.

Researchers have proposed the development of gate-controllable TMD spin valves to address these challenges. By integrating a gate mechanism, these devices can modulate spin transport properties, enabling precise control over memory operations. This approach aims to enhance tunneling magnetoresistance (TMR) ratios, improve spin current densities, and reduce power consumption during read and write processes. The study is published in the Journal of Alloys and Compounds.
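
As a quick reference for the TMR figure of merit mentioned above, the ratio is conventionally computed from the junction’s resistance in its antiparallel and parallel magnetization states. The values in the sketch below are placeholders, not measurements from the study.

```python
# Tunneling magnetoresistance (TMR) ratio: (R_AP - R_P) / R_P, often quoted in %.
# The resistance values below are placeholders, not results from the paper.
def tmr_ratio(r_parallel: float, r_antiparallel: float) -> float:
    """Return the TMR ratio as a percentage."""
    return (r_antiparallel - r_parallel) / r_parallel * 100.0

print(tmr_ratio(r_parallel=1.0e3, r_antiparallel=3.5e3))  # -> 250.0 (%)
```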
