
Hybrid Cosmopsychism: A Bold New Answer to the Mystery of Consciousness

How Exactly Does Panpsychism Help Explain Consciousness?
In this episode, we explore a provocative new theory in the philosophy of mind: hybrid cosmopsychism. This hybrid form of panpsychism claims that conscious experience is rooted in the universe itself and distributed through emergent subjects like humans. By combining strong and weak emergence, this view promises to overcome the limitations of both physicalism and dualism, offering a radical yet elegant solution to the hard problem of consciousness.

Disclaimer:

In this video, we use Google’s NotebookLM to assist in the analysis and understanding of complex documents. NotebookLM is a research and writing tool that allows us to generate summaries directly from uploaded documents. The podcast-like audio overview you will hear is generated by Google’s AI based on the content of the published paper on the topic.

Please note that the interpretations and summaries generated by NotebookLM are automated and may not capture every detail or nuance. They are intended to aid in understanding but should not be considered a substitute for professional advice or a legal interpretation of the documents.

What’s Wrong with Panpsychism? | Joscha Bach

Patreon: https://bit.ly/3v8OhY7

Main Channel: https://www.youtube.com/@robinsonerhardt

Full Episode: https://youtu.be/XcNlv9gp20o

Robinson’s Podcast #219 — Consciousness, Artificial Intelligence, and the Threat of AI Apocalypse.

Joscha Bach is a computer scientist and artificial intelligence researcher currently working with Liquid AI. He has previously done research at Harvard, MIT, Intel, and the AI Foundation. In this episode, Joscha and Robinson discuss the nature of consciousness, both human and synthetic; various theories of consciousness, including panpsychism, physicalism, dualism, and Roger Penrose’s; the distinction between intelligence and artificial intelligence; the next developments of ChatGPT and other LLMs; OpenAI; and whether advances in AI will spell the end of humankind.

Joscha’s X: ⁠https://twitter.com/Plinz

Lab-in-the-loop framework enables rapid evolution of complex multi-mutant proteins

The search space for protein engineering grows exponentially with complexity. A protein of just 100 amino acids has 20¹⁰⁰ possible variants, more combinations than there are atoms in the observable universe. Traditional engineering methods might test hundreds of variants but limit exploration to narrow regions of the sequence space. Recent machine learning approaches enable broader searches through computational screening. However, these approaches still require tens of thousands of measurements, or 5–10 iterative rounds.
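As a quick check on that arithmetic, a minimal Python snippet comparing the sequence-space size against the commonly cited rough estimate of 10⁸⁰ atoms in the observable universe:

```python
# Sequence space for a 100-residue protein: 20 possible amino acids
# at each of 100 positions.
num_variants = 20 ** 100
print(f"{num_variants:.2e}")  # ~1.27e+130

# Rough, commonly cited estimate of atoms in the observable universe.
atoms_in_universe = 10 ** 80
print(num_variants > atoms_in_universe)  # True, by ~50 orders of magnitude
```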

With the advent of these foundational protein models, the bottleneck for protein engineering swings back to the lab. For a single protein engineering campaign, researchers can only efficiently build and test hundreds of variants. What is the best way to choose those hundreds to most effectively uncover an evolved protein with substantially increased function? To address this problem, researchers have developed MULTI-evolve, a framework for efficient protein evolution that applies machine learning models trained on datasets of ~200 variants focused specifically on pairs of function-enhancing mutations.
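To make the loop concrete, here is a hypothetical sketch of a single design round: fit a surrogate model on a small set of measured variants, then rank untested multi-mutant candidates for the next batch of experiments. The one-hot encoding, random-forest surrogate, and function names are illustrative assumptions, not the published MULTI-evolve method.

```python
# Hypothetical sketch of one lab-in-the-loop design round. All names and
# modeling choices here are illustrative assumptions, not the published
# MULTI-evolve framework.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

AMINO_ACIDS = "ACDEFGHIKLMNPQRSTVWY"
AA_INDEX = {aa: i for i, aa in enumerate(AMINO_ACIDS)}

def one_hot(seq: str) -> np.ndarray:
    """Flatten a protein sequence into a one-hot feature vector."""
    x = np.zeros((len(seq), len(AMINO_ACIDS)))
    for pos, aa in enumerate(seq):
        x[pos, AA_INDEX[aa]] = 1.0
    return x.ravel()

def propose_next_round(measured: dict, candidates: list, k: int = 96) -> list:
    """Fit a surrogate on a small measured dataset (~200 variants), then
    rank untested multi-mutant candidates and return the top k to build
    and test in the next round of lab experiments."""
    X = np.array([one_hot(seq) for seq in measured])
    y = np.array([measured[seq] for seq in measured])
    surrogate = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
    scores = surrogate.predict(np.array([one_hot(seq) for seq in candidates]))
    best = np.argsort(scores)[::-1][:k]
    return [candidates[i] for i in best]
```

In the published framework, the ~200 training variants focus on pairs of function-enhancing mutations; in this sketch, `candidates` would hold the higher-order combinations stacked from those pairs.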

Published in Science, this work represents Arc Institute’s first lab-in-the-loop framework for biological design, where computational prediction and experimental design are tightly integrated from the outset, reflecting a broader investment in AI-guided research.

PromptSpy is the first known Android malware to use generative AI at runtime

Researchers have discovered the first known Android malware to use generative AI in its execution flow, using Google’s Gemini model to adapt its persistence across different devices.

In a report today, ESET researcher Lukas Stefanko explains how a new Android malware family named “PromptSpy” is abusing the Google Gemini AI model to help it achieve persistence on infected devices.

“In February 2026, we uncovered two versions of a previously unknown Android malware family,” explains ESET.

Artificial intelligence in medicine: How it works, how it fails

Artificial intelligence (AI) is transforming healthcare, with large language models emerging as important tools for clinical practice, education, and research. To use these tools safely and effectively, healthcare professionals need to understand how they work and how they fail. Using practical clinical examples, the authors explain the subset of AI called large language models, highlighting their capabilities and limitations.

Key Points

  • AI is trained on vast amounts of data, which can itself be biased, leading to biased results (see the toy sketch below).
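As a toy illustration of that point (not a clinical model), the sketch below trains a classifier on synthetic data in which one patient subgroup is under-represented; accuracy on the scarce group ends up markedly lower:

```python
# Toy illustration of data bias: a classifier trained on data that
# under-represents one subgroup performs worse on that subgroup.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    """Synthetic 'patients': one feature; the label threshold depends
    on a group-specific shift."""
    x = rng.normal(size=(n, 1))
    y = (x[:, 0] + shift > 0).astype(int)
    return x, y

# Group A dominates training; group B is scarce and distributed differently.
xa, ya = make_group(950, shift=0.0)
xb, yb = make_group(50, shift=1.5)
X = np.vstack([xa, xb])
y = np.concatenate([ya, yb])

clf = LogisticRegression().fit(X, y)

# Evaluate on fresh, equally sized samples from each group.
xa_t, ya_t = make_group(1000, shift=0.0)
xb_t, yb_t = make_group(1000, shift=1.5)
print("accuracy on group A:", clf.score(xa_t, ya_t))  # near 1.0
print("accuracy on group B:", clf.score(xb_t, yb_t))  # markedly lower
```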

AI tool debuts with better genomic predictions and explanations

Artificial intelligence has taken the world by storm. In biology, AI tools called deep neural networks (DNNs) have proven invaluable for predicting the results of genomic experiments. That usefulness positions these tools to enable efficient, AI-guided research and potentially lifesaving discoveries, if scientists can work out the kinks. The findings are published in the journal npj Artificial Intelligence.

“Right now, there are a lot of different AI tools where you’ll give an input, and they’ll give an output, but we don’t have a good way of assessing the certainty, or how confident they are, in their answers,” explains Cold Spring Harbor Laboratory (CSHL) Associate Professor Peter Koo. “They all come out in the same format, whether you’re using a large language model or DNNs used in genomics and other fields of biology.”

It’s one of the greatest challenges today’s researchers face. Now, Koo, former CSHL postdoc Jessica Zhou, and graduate student Kaeli Rizzo have devised a potential solution: DEGU (Distilling Ensembles for Genomic Uncertainty-aware models). DNNs trained using DEGU are more efficient and more accurate in their predictions than those trained via standard methods.
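Details aside, the general recipe of distilling an ensemble into a single uncertainty-aware network can be sketched as follows. The two-head student and the loss below are illustrative assumptions in the spirit of the summary above, not the published DEGU implementation.

```python
# Minimal sketch of ensemble distillation for uncertainty. The architecture
# and loss are illustrative assumptions, not the published DEGU method.
import torch
import torch.nn as nn

def ensemble_targets(models, x):
    """The ensemble mean is the distillation target; the ensemble spread
    (std across members) becomes the target for the uncertainty head."""
    with torch.no_grad():
        preds = torch.stack([m(x) for m in models])  # (n_models, batch, 1)
    return preds.mean(dim=0), preds.std(dim=0)

class Student(nn.Module):
    """A single network with two heads: predicted activity and uncertainty."""
    def __init__(self, d_in, d_hidden=128):
        super().__init__()
        self.trunk = nn.Sequential(nn.Linear(d_in, d_hidden), nn.ReLU())
        self.mean_head = nn.Linear(d_hidden, 1)
        self.unc_head = nn.Sequential(nn.Linear(d_hidden, 1), nn.Softplus())

    def forward(self, x):
        h = self.trunk(x)
        return self.mean_head(h), self.unc_head(h)

def distill_step(student, models, x, opt):
    """One training step: match the student's two heads to the ensemble's
    mean prediction and its member-to-member spread."""
    mu_t, sd_t = ensemble_targets(models, x)
    mu_s, sd_s = student(x)
    loss = nn.functional.mse_loss(mu_s, mu_t) + nn.functional.mse_loss(sd_s, sd_t)
    opt.zero_grad()
    loss.backward()
    opt.step()
    return loss.item()
```

The payoff of this style of distillation is that, at inference time, a single forward pass returns both a prediction and a confidence estimate, instead of requiring every ensemble member to be run.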
