
The interview explores the fundamental premises of Analytic Idealism. Dr. Bernardo Kastrup, known for developing this philosophical system, discusses the nature of consciousness, life, God, and AI with Natalia Vorontsova.
All questions are based on input from our audience; all previous interviews referenced during the conversation are listed below.

Prof. Bernard Carr.
• Cosmologist Prof. Bernard Carr On Con…
Dr. Bernardo Kastrup & Prof. Bernard Carr.
• What happens to consciousness when cl…
Prof. Julia Mossbridge.
• The Science of Precognition | Dr. Jul…
Dr. Federico Faggin.
• Interview with idealist physicist and…
• Groundbreaking Consciousness Theory B…
Prof. Marjorie Woollacott.
• New Evidence for Out-of-Body Experien…

00:00:00 Interview intro.
00:02:21 Is the fundamental nature of reality really mental?
00:07:38 Mind at Large vs. our individual minds.
00:10:01 What is the purpose of Life in general and our individual lives?
00:17:35 Does the brain generate consciousness or vice versa? Mind-matter relationship.
00:21:06 What is matter according to Analytic Idealism?
00:27:00 The role of evolution.
00:40:30 Does objective reality exist?
00:42:08 Does the Divine exist? God versus Universal Consciousness.
00:49:04 Pantheism versus panentheism: the nature of reality.
00:55:40 What is consciousness? Consciousness with big C and small c.
01:02:20 Anomalous phenomena in the context of Analytic Idealism.
01:05:59 Birth & death in the absence of time & space. Is spacetime fundamental?
01:10:34 Can love, justice or virtue exist if there is no free will? What is free will?
01:17:25 Why is Analytic Idealism considered to be a non-dual philosophy?
01:19:26 Under what conditions can AI become conscious? Blessing or threat?
01:29:33 Science and the world at large if & when Analytic Idealism becomes the mainstream paradigm.

GPUs are widely recognized for their efficiency in handling high-performance computing workloads, such as those found in artificial intelligence and scientific simulations. These processors are designed to execute thousands of threads simultaneously, with hardware support for features like register file access optimization, memory coalescing, and warp-based scheduling. This design supports extensive data parallelism and delivers high throughput on the complex computational tasks that are increasingly common across scientific and engineering domains.

A major challenge in academic research on GPU microarchitectures is the dependence on outdated architecture models. Many studies still use as their baseline the Tesla-based pipeline, released more than fifteen years ago. Since then, GPU architectures have evolved significantly, including the introduction of sub-core components, new control bits for compiler-hardware coordination, and enhanced cache mechanisms. Continuing to simulate modern workloads on obsolete architectures distorts performance evaluations and hinders innovation in architecture-aware software design.

Some simulators have tried to keep pace with these architectural changes. Tools like GPGPU-Sim and Accel-sim are commonly used in academia, yet even their updated versions lack fidelity in modeling key aspects of modern architectures such as Turing or Ampere. These tools often fail to accurately represent instruction fetch mechanisms, register file cache behaviors, and the coordination between compiler control bits and hardware components. A simulator that omits such features can produce gross errors in estimated cycle counts and misidentify execution bottlenecks.

The Model Context Protocol (MCP) is an open standard (open-sourced by Anthropic) that defines a unified way to connect AI assistants (LLMs) with external data sources and tools. Think of MCP as a USB-C port for AI applications – a universal interface that allows any AI assistant to plug into any compatible data source or service. By standardizing how context is provided to AI models, MCP breaks down data silos and enables seamless, context-rich interactions across diverse systems.

In practical terms, MCP enhances an AI assistant’s capabilities by giving it controlled access to up-to-date information and services beyond its built-in knowledge. Instead of operating on a fixed prompt or static training data, an MCP-enabled assistant can fetch real-time data, query private knowledge bases, or perform actions on external tools. This helps overcome limitations such as the model’s knowledge cutoff and fixed context window. Simply “stuffing” all relevant text into an LLM’s prompt can hit context-length limits, slow responses, and drive up costs. MCP’s on-demand retrieval of pertinent information keeps the AI’s context focused and fresh, allowing it to incorporate current data and update or modify external information when permitted.
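To make the contrast concrete, here is a toy, self-contained sketch of on-demand retrieval versus prompt stuffing. It is not part of MCP itself; the corpus, queries, and keyword-matching rule are invented for illustration.

```python
# Toy illustration of on-demand retrieval versus "stuffing" a whole
# corpus into the prompt. Real MCP servers expose far richer tools;
# this only shows why selective context stays small and focused.

CORPUS = {
    "release_notes": "v2.1 adds OAuth support and fixes the export bug.",
    "pricing": "The team plan costs $12 per seat per month.",
    "roadmap": "Dark mode is planned for Q3.",
}

def stuff_everything(corpus):
    """Naive approach: the prompt grows with the entire corpus."""
    return "\n".join(corpus.values())

def retrieve_on_demand(corpus, query):
    """Keep only passages that share a keyword with the query."""
    query_words = set(query.lower().split())
    return "\n".join(
        text for text in corpus.values()
        if query_words & set(text.lower().split())
    )

context = retrieve_on_demand(CORPUS, "pricing per seat")
# Only the pricing passage is included, so the prompt stays short.
assert len(context) < len(stuff_everything(CORPUS))
```

A real MCP client would obtain such context by calling a server-side tool rather than scanning a local dictionary, but the payoff is the same: the model sees only what the current request needs.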

Another way MCP improves AI integration is by unifying the development pattern. Before MCP, connecting an AI to external data often meant using bespoke integrations or framework-specific plugins. This fragmented approach forced developers to re-implement the same tool multiple times for different AI systems. MCP eliminates this redundancy by providing one standardized protocol. An MCP-compliant server (tool integration) can work with any MCP-compliant client (AI application). In short, MCP lets you “write once, use anywhere” when adding new data sources or capabilities to AI assistants. It brings consistent discovery and usage of tools and improved security. All these benefits make MCP a powerful foundation for building more capable and extensible AI assistant applications.
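Under the hood, MCP messages use JSON-RPC 2.0, and a client invokes a server-exposed tool with a `tools/call` request. The minimal sketch below builds such a message; the tool name `get_weather` and its arguments are hypothetical, not part of any real server.

```python
import json

def make_tool_call(request_id, tool_name, arguments):
    """Build an MCP tools/call request as a JSON-RPC 2.0 message."""
    return {
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool_name, "arguments": arguments},
    }

# Hypothetical tool and arguments, for illustration only.
req = make_tool_call(1, "get_weather", {"city": "Amsterdam"})
print(json.dumps(req, indent=2))
```

Because every MCP server answers the same message shapes (e.g. `tools/list` to discover tools, `tools/call` to invoke one), a client written against this protocol works with any compliant server — the "write once, use anywhere" property described above.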

ABSTRACT. Seismic waves carry rich information about earthquake sources and the Earth’s medium. However, the process of extracting earthquake source parameters from seismic waves using traditional methods is complex and time-consuming. In this study, we present a deep‐learning‐based method for automatic determination of earthquake source parameters. Considering the principle of calculating source parameters, the input of the deep neural network (SourceNet) includes not only the seismic waveform, but also the amplitude, epicenter distance, and station information. The utilization of multimodal data significantly improves the accuracy of determining earthquake source parameters. Test results using real seismic data from the Sichuan–Yunnan region show that the earthquake source parameters obtained by SourceNet are in good agreement with the manual results and are obtained with higher computational efficiency. We apply the trained SourceNet to the seismic activities in the Changning area and further verify the reliability of the method by comparing our estimates of stress drops with those reported in previous studies of this area. The average time for SourceNet to calculate the source parameters of an earthquake is less than 0.1 s, making it suitable for real‐time automatic determination of source parameters.
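As a rough illustration of the multimodal-input idea, the waveform and the scalar metadata can be assembled into a single feature vector before being fed to a network. This is a sketch only, not the authors’ SourceNet code; all field names and numbers are invented.

```python
# Toy sketch of assembling SourceNet-style multimodal input:
# waveform samples plus scalar metadata (peak amplitude, epicentral
# distance, station coordinates) joined into one feature vector.

def build_input(waveform, amplitude, epicentral_distance_km, station):
    """Normalize the waveform and append scalar metadata features."""
    peak = max(abs(x) for x in waveform) or 1.0
    normalized = [x / peak for x in waveform]
    metadata = [amplitude, epicentral_distance_km,
                station["lat"], station["lon"]]
    return normalized + metadata

features = build_input(
    waveform=[0.0, 1.2, -3.4, 0.8],
    amplitude=3.4,
    epicentral_distance_km=42.0,
    station={"lat": 28.9, "lon": 104.9},
)
# Four waveform samples plus four metadata scalars.
assert len(features) == 8
```

The point of the sketch is only that normalizing the waveform while passing amplitude separately preserves both the shape and the absolute-scale information the abstract says the network needs.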

The world of robotics is undergoing a significant transformation, driven by rapid advancements in physical AI. This evolution is shortening the time to market for new robotic solutions, increasing confidence in their safety capabilities, and helping power physical AI in factories and warehouses.

Announced at GTC, Newton is an open-source, extensible physics engine developed by NVIDIA, Google DeepMind, and Disney Research to advance robot learning and development.

NVIDIA Cosmos launched as a world foundation model (WFM) platform under an open model license to accelerate the development of physical AI for autonomous machines such as autonomous vehicles and robots.

A breakthrough in imaging technology promises to transform our understanding of the inner workings of living cells, and provide insights into a wide range of diseases.

The study, recently published in the journal Nature Communications, unveils an innovative approach that combines super-resolution imaging with complementary techniques to reveal dynamics inside living cells. It was led by researchers from Peking University, Ningbo Eastern Institute of Technology and the University of Technology Sydney.

“It’s like taking an airplane over a city at night and watching all the live interactions,” said UTS Distinguished Professor Dayong Jin. “This cutting-edge [technology] will open new doors in the quest to understand the intricate world within our cells.”

In the paper accompanying the launch of R1, DeepSeek explained how it took advantage of techniques such as synthetic data generation, distillation, and machine-driven reinforcement learning to produce a model that exceeded the current state of the art. Put another way, each of these approaches harnesses the capabilities of an existing AI model to assist in the training of a more advanced version.
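For one of those techniques, knowledge distillation, the core idea fits in a few lines: the student model is trained to match the teacher’s full, temperature-softened output distribution rather than just its top answer. The sketch below uses made-up logits and plain Python; real distillation applies this loss to neural-network outputs at scale.

```python
import math

def softmax(logits, temperature=1.0):
    """Temperature-scaled softmax over a list of logits."""
    exps = [math.exp(x / temperature) for x in logits]
    total = sum(exps)
    return [e / total for e in exps]

def distillation_loss(teacher_logits, student_logits, temperature=2.0):
    """Cross-entropy of the student against the teacher's softened
    distribution -- the core objective of knowledge distillation."""
    teacher = softmax(teacher_logits, temperature)
    student = softmax(student_logits, temperature)
    return -sum(t * math.log(s) for t, s in zip(teacher, student))

# Made-up logits: the student that tracks the teacher's whole
# distribution incurs a lower loss than one that disagrees with it.
teacher = [4.0, 1.0, 0.5]
close_student = [3.8, 1.1, 0.4]
far_student = [0.5, 4.0, 1.0]
assert distillation_loss(teacher, close_student) < distillation_loss(teacher, far_student)
```

Raising the temperature flattens the teacher’s distribution, exposing how it ranks the wrong answers too — that “dark knowledge” is what makes a distilled student stronger than one trained on hard labels alone.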

DeepSeek is far from alone in using these AI techniques to advance AI. Mark Zuckerberg predicts that mid-level engineers at Meta may soon be replaced by AI counterparts, and says that Llama 3 (his company’s LLM) “helps us experiment and iterate faster, building capabilities we want to refine and expand in Llama 4.” Nvidia CEO Jensen Huang has spoken at length about creating virtual environments in which AI systems supervise the training of robotic systems: “We can create multiple different multiverses, allowing robots to learn in parallel, possibly learning in 100,000 different ways at the same time.”

This isn’t quite the singularity, when intelligent machines autonomously self-replicate, but it is something new and potentially profound. Even amid such dizzying progress in AI models, it’s not uncommon to hear observers talk about a potential slowing of the “scaling laws”: the observed principle that AI models improve in direct relation to the quantity of data, power, and compute applied to them. The release from DeepSeek, and several subsequent announcements from other companies, suggests that reports of the scaling laws’ demise may be greatly exaggerated. In fact, innovations in AI development are opening entirely new vectors for scaling, all enabled by AI itself. Progress isn’t slowing down; it’s speeding up, thanks to AI.