
Parkinson’s doesn’t just affect movement and the brain—it may also impact the heart, according to new research from the University of Surrey. Scientists from Surrey’s School of Veterinary Medicine suggest that targeting a key protein outside of the brain could help manage Parkinson’s-related heart issues.

In a study published in Experimental Physiology, Surrey researchers studied mouse models and found a harmful buildup of the alpha-synuclein protein, which is associated with Parkinson’s disease, in a nerve cluster near the heart (the stellate ganglia). These nerves are part of the autonomic nervous system, which controls heart rate and rhythm.

Researchers found that 27% of neurons in the nerve cluster contained aggregated alpha-synuclein, forming toxic clumps similar to those seen in the brains of Parkinson’s patients. This finding suggests that Parkinson’s could disrupt heart function, not just movement.

The interview explores the fundamental premises of Analytic Idealism. Dr. Bernardo Kastrup, known for developing this philosophical system, discusses the nature of consciousness, life, God, and AI with Natalia Vorontsova.
All questions are based on input from our audience, and all previous interviews referenced during the conversation are listed below.

Prof. Bernard Carr.
• Cosmologist Prof. Bernard Carr On Con…
Dr. Bernardo Kastrup & Prof. Bernard Carr.
• What happens to consciousness when cl…
Prof. Julia Mossbridge.
• The Science of Precognition | Dr. Jul…
Dr. Federico Faggin.
• Interview with idealist physicist and…
• Groundbreaking Consciousness Theory B…
Prof. Marjorie Woollacott.
• New Evidence for Out-of-Body Experien…

00:00:00 Interview intro.
00:02:21 Is the fundamental nature of reality really mental?
00:07:38 Mind at Large vs. our individual minds.
00:10:01 What is the purpose of Life in general and our individual lives?
00:17:35 Does the brain generate consciousness or vice versa? Mind-matter relationship.
00:21:06 What is matter according to Analytic Idealism?
00:27:00 The role of evolution.
00:40:30 Does objective reality exist?
00:42:08 Does the Divine exist? God versus Universal Consciousness.
00:49:04 Pantheism versus panentheism: the nature of reality.
00:55:40 What is consciousness? Consciousness with big C and small c.
01:02:20 Anomalous phenomena in the context of Analytic Idealism.
01:05:59 Birth & death in the absence of time & space. Is spacetime fundamental?
01:10:34 Can love, justice or virtue exist if there is no free will? What is free will?
01:17:25 Why is Analytic Idealism considered to be a non-dual philosophy?
01:19:26 Under what conditions can AI become conscious? Blessing or threat?
01:29:33 Science and the world at large if & when Analytic Idealism becomes the mainstream paradigm.

The process of catalysis—in which a material speeds up a chemical reaction—is crucial to the production of many of the chemicals used in our everyday lives. But even though these catalytic processes are widespread, researchers often lack a clear understanding of exactly how they work.

A new analysis by researchers at MIT has shown that an important industrial synthesis process, the production of vinyl acetate, requires a catalyst to take two different forms, which cycle back and forth from one to the other as the chemical process unfolds.

Previously, it had been thought that only one of the two forms was needed. The new findings are published today in the journal Science, in a paper by MIT graduate students Deiaa Harraz and Kunal Lodaya; Bryan Tang, Ph.D.; and MIT professor of chemistry and chemical engineering Yogesh Surendranath.

GPUs are widely recognized for their efficiency in handling high-performance computing workloads, such as those found in artificial intelligence and scientific simulations. These processors are designed to execute thousands of threads simultaneously, with hardware support for features like register file access optimization, memory coalescing, and warp-based scheduling. Their structure allows them to support extensive data parallelism and achieve high throughput on complex computational tasks increasingly prevalent across diverse scientific and engineering domains.

A major challenge in academic research involving GPU microarchitectures is the dependence on outdated architecture models. Many studies still use as their baseline the Tesla-based pipeline, an architecture released more than fifteen years ago. Since then, GPU architectures have evolved significantly, including the introduction of sub-core components, new control bits for compiler-hardware coordination, and enhanced cache mechanisms. Continuing to simulate modern workloads on obsolete architectures misguides performance evaluations and hinders innovation in architecture-aware software design.

Some simulators have tried to keep pace with these architectural changes. Tools like GPGPU-Sim and Accel-sim are widely used in academia, but even their updated versions lack fidelity in modeling key aspects of modern architectures such as Turing or Ampere. They often fail to accurately represent instruction fetch mechanisms, register file cache behavior, and the coordination between compiler control bits and hardware components. A simulator that misrepresents these features can produce gross errors in estimated cycle counts and in the identification of execution bottlenecks.

The Model Context Protocol (MCP) is an open standard (open-sourced by Anthropic) that defines a unified way to connect AI assistants (LLMs) with external data sources and tools. Think of MCP as a USB-C port for AI applications – a universal interface that allows any AI assistant to plug into any compatible data source or service. By standardizing how context is provided to AI models, MCP breaks down data silos and enables seamless, context-rich interactions across diverse systems.
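To make the "universal interface" idea concrete, here is a minimal sketch of the message shapes involved. MCP messages are JSON-RPC 2.0, and the `tools/list` and `tools/call` method names come from the MCP specification; the `search_docs` tool and its arguments are hypothetical examples, not part of the standard.

```python
import json

# MCP exchanges JSON-RPC 2.0 messages. A client first discovers available
# tools, then invokes one. The "search_docs" tool below is a made-up example.

list_request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/list",  # ask the server which tools it offers
}

call_request = {
    "jsonrpc": "2.0",
    "id": 2,
    "method": "tools/call",  # invoke one tool by name
    "params": {
        "name": "search_docs",                       # hypothetical tool
        "arguments": {"query": "quarterly revenue"}  # tool-specific input
    },
}

# Any MCP-compliant server parses the same shapes off the wire.
wire = json.dumps(call_request)
decoded = json.loads(wire)
print(decoded["method"])          # tools/call
print(decoded["params"]["name"])  # search_docs
```

Because every client and server agrees on these same shapes, "plugging in" a new data source is a matter of implementing the server side of this handshake once.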

In practical terms, MCP enhances an AI assistant’s capabilities by giving it controlled access to up-to-date information and services beyond its built-in knowledge. Instead of operating with a fixed prompt or static training data, an MCP-enabled assistant can fetch real-time data, use private knowledge bases, or perform actions on external tools. This helps overcome limitations like the model’s knowledge cutoff and fixed context window. Simply “stuffing” all relevant text into an LLM’s prompt can hit context-length limits, slow responses, and drive up cost. MCP’s on-demand retrieval of pertinent information keeps the AI’s context focused and fresh, allowing it to incorporate current data and update or modify external information when permitted.
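The contrast between prompt "stuffing" and on-demand retrieval can be illustrated with a toy example. The knowledge base and the naive keyword matcher below are hypothetical stand-ins for whatever an MCP server would actually query; the point is only that the retrieved context is a small, relevant slice rather than the whole corpus.

```python
# Toy illustration of on-demand retrieval vs. prompt "stuffing".
# KNOWLEDGE_BASE and the keyword matcher are hypothetical stand-ins.

KNOWLEDGE_BASE = {
    "refund_policy": "Refunds are issued within 14 days of purchase.",
    "shipping": "Standard shipping takes 3-5 business days.",
    "warranty": "Hardware is covered by a 2-year limited warranty.",
}

def stuff_everything() -> str:
    """Naive approach: concatenate the entire knowledge base into the prompt."""
    return "\n".join(KNOWLEDGE_BASE.values())

def retrieve(query: str) -> str:
    """On-demand approach: return only entries whose key appears in the query."""
    hits = [text for key, text in KNOWLEDGE_BASE.items() if key in query]
    return "\n".join(hits)

full = stuff_everything()
focused = retrieve("what is your refund_policy?")

print(len(focused) < len(full))  # True: the focused context is smaller
print(focused)
```

A real MCP server would replace the keyword matcher with search, database queries, or API calls, but the payoff is the same: the model sees only the context it needs for the current request.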

Another way MCP improves AI integration is by unifying the development pattern. Before MCP, connecting an AI to external data often meant using bespoke integrations or framework-specific plugins. This fragmented approach forced developers to re-implement the same tool multiple times for different AI systems. MCP eliminates this redundancy by providing one standardized protocol. An MCP-compliant server (tool integration) can work with any MCP-compliant client (AI application). In short, MCP lets you “write once, use anywhere” when adding new data sources or capabilities to AI assistants. It brings consistent discovery and usage of tools and improved security. All these benefits make MCP a powerful foundation for building more capable and extensible AI assistant applications.
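The "write once, use anywhere" pattern can be sketched as a single tool registry behind a protocol-shaped dispatcher. The `tools/list` and `tools/call` method names follow the MCP specification; everything else here, including the `get_weather` tool and the `tool` decorator, is illustrative and not the official SDK.

```python
import json
from typing import Callable

# One registry, one dispatcher: any client speaking the same protocol can
# use this server code unchanged. Tool names and helpers are hypothetical.

TOOLS: dict[str, Callable[[dict], str]] = {}

def tool(name: str):
    """Register a function under a tool name."""
    def decorator(fn):
        TOOLS[name] = fn
        return fn
    return decorator

@tool("get_weather")
def get_weather(args: dict) -> str:  # hypothetical tool
    return f"Sunny in {args['city']}"

def handle(request_json: str) -> dict:
    """Dispatch one JSON-RPC-style request against the registry."""
    req = json.loads(request_json)
    if req["method"] == "tools/list":
        result = {"tools": sorted(TOOLS)}
    elif req["method"] == "tools/call":
        name = req["params"]["name"]
        result = {"content": TOOLS[name](req["params"]["arguments"])}
    else:
        result = {"error": "unknown method"}
    return {"jsonrpc": "2.0", "id": req["id"], "result": result}

resp = handle(json.dumps({
    "jsonrpc": "2.0", "id": 1, "method": "tools/call",
    "params": {"name": "get_weather", "arguments": {"city": "Oslo"}},
}))
print(resp["result"]["content"])  # Sunny in Oslo
```

Adding a new capability means registering one more function in the registry; no per-client plugin code is needed, which is the redundancy MCP removes.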