
AI has created a sea change in society; now, it is setting its sights on the sea itself. Researchers at Osaka Metropolitan University have developed a machine learning-powered fluid simulation model that significantly reduces computation time without compromising accuracy.

Their fast and precise technique opens up potential applications in offshore power generation, ship design and real-time ocean monitoring. The study was published in Applied Ocean Research.

Accurately predicting fluid behavior is crucial for industries relying on wave and tidal energy, as well as for the design of maritime structures and vessels.
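The article itself does not include code, but the general idea of a learned fluid surrogate can be sketched briefly. The example below is an illustration only, not the Osaka Metropolitan University model: a small PyTorch network (all layer choices and the grid-of-wave-heights representation are assumptions) trained to advance the fluid state by one time step, so that cheap network evaluations stand in for expensive solver iterations.

```python
# Illustrative sketch only -- not the model from the Applied Ocean Research study.
# A small network maps the current fluid state (a 2-D grid of surface heights)
# to the state one time step later, replacing a costly numerical solver step.
import torch
import torch.nn as nn

class FluidSurrogate(nn.Module):
    def __init__(self, channels: int = 16):
        super().__init__()
        # Shallow convolutional network; the real study's architecture differs.
        self.net = nn.Sequential(
            nn.Conv2d(1, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.Conv2d(channels, 1, kernel_size=3, padding=1),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        # state: (batch, 1, H, W) grid of wave heights at time t
        # returns the predicted grid at time t + dt as a residual update
        return state + self.net(state)

model = FluidSurrogate()
state = torch.randn(1, 1, 64, 64)      # synthetic initial condition
with torch.no_grad():
    for _ in range(10):                # roll the surrogate forward 10 steps
        state = model(state)
```

Once such a surrogate is trained on solver output, rolling it forward replaces many solver iterations, which is where the reported reduction in computation time comes from.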

Cervical artery dissection is a tear in an artery in the neck that provides blood flow to the brain. Such a tear can result in blood clots that cause stroke. A new study has found almost a five-fold increase in the number of U.S. hospitalizations for cervical artery dissection over a 15-year period.

The study is published online in Neurology.

A dissection of the artery wall is most often caused by trauma, but it can also occur with smaller injuries. Heavy lifting has also been shown to cause dissection in some people.

The interview explores the fundamental premises of Analytic Idealism. Dr. Bernardo Kastrup, known for developing this philosophical system, discusses the nature of consciousness, life, God, and AI with Natalia Vorontsova.
All questions are based on input from our audience, and all previous interviews referenced during the conversation are listed below.

Prof. Bernard Carr.
• Cosmologist Prof. Bernard Carr On Con…
Dr. Bernardo Kastrup & Prof. Bernard Carr.
• What happens to consciousness when cl…
Prof. Julia Mossbridge.
• The Science of Precognition | Dr. Jul…
Dr. Federico Faggin.
• Interview with idealist physicist and…
• Groundbreaking Consciousness Theory B…
Prof. Marjorie Woollacott.
• New Evidence for Out-of-Body Experien…

00:00:00 Interview intro.
00:02:21 Is the fundamental nature of reality really mental?
00:07:38 Mind at Large vs. our individual minds.
00:10:01 What is the purpose of Life in general and our individual lives?
00:17:35 Does the brain generate consciousness or vice versa? Mind-matter relationship.
00:21:06 What is matter according to Analytic Idealism?
00:27:00 The role of evolution.
00:40:30 Does objective reality exist?
00:42:08 Does the Divine exist? God versus Universal Consciousness.
00:49:04 Pantheism versus panentheism: the nature of reality.
00:55:40 What is consciousness? Consciousness with big C and small c.
01:02:20 Anomalous phenomena in the context of Analytic Idealism.
01:05:59 Birth & death in the absence of time & space. Is spacetime fundamental?
01:10:34 Can love, justice or virtue exist if there is no free will? What is free will?
01:17:25 Why is Analytic Idealism considered to be a non-dual philosophy?
01:19:26 Under what conditions can AI become conscious? Blessing or threat?
01:29:33 Science and the world at large if & when Analytic Idealism becomes the mainstream paradigm.

GPUs are widely recognized for their efficiency in handling high-performance computing workloads, such as those found in artificial intelligence and scientific simulations. These processors are designed to execute thousands of threads simultaneously, with hardware support for features like register file access optimization, memory coalescing, and warp-based scheduling. Their structure allows them to support extensive data parallelism and achieve high throughput on complex computational tasks increasingly prevalent across diverse scientific and engineering domains.
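To make the data-parallelism point concrete, here is a minimal sketch (not taken from any of the simulators discussed below) written with Numba's CUDA support in Python. Each thread processes one array element, so consecutive threads in a 32-thread warp touch consecutive memory addresses, which is the coalesced access pattern referred to above. Running it requires a CUDA-capable GPU; the kernel and array sizes are illustrative assumptions.

```python
# Minimal illustration of GPU data parallelism and coalesced memory access.
import numpy as np
from numba import cuda

@cuda.jit
def saxpy(a, x, y, out):
    i = cuda.grid(1)               # global thread index across the whole grid
    if i < x.size:                 # guard threads that fall past the array end
        out[i] = a * x[i] + y[i]   # neighbouring threads read neighbouring elements

n = 1 << 20
x = np.random.rand(n).astype(np.float32)
y = np.random.rand(n).astype(np.float32)
out = np.zeros_like(x)

threads_per_block = 256            # a multiple of the 32-thread warp size
blocks = (n + threads_per_block - 1) // threads_per_block
saxpy[blocks, threads_per_block](2.0, x, y, out)   # requires a CUDA-capable GPU
```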

A major challenge in academic research involving GPU microarchitectures is the dependence on outdated architecture models. Many studies still use the Tesla-based pipeline, released more than fifteen years ago, as their baseline. Since then, GPU architectures have evolved significantly, with the introduction of sub-core components, new control bits for compiler-hardware coordination, and enhanced cache mechanisms. Continuing to simulate modern workloads on obsolete architectures distorts performance evaluations and hinders innovation in architecture-aware software design.

Some simulators have tried to keep pace with these architectural changes. Tools like GPGPU-Sim and Accel-sim are commonly used in academia, yet even their updated versions lack fidelity in modeling key aspects of modern architectures such as Ampere or Turing. These tools often fail to accurately represent instruction fetch mechanisms, register file cache behaviors, and the coordination between compiler control bits and hardware components. A simulator that omits such features can produce gross errors in estimated cycle counts and misidentify execution bottlenecks.

The Model Context Protocol (MCP) is an open standard (open-sourced by Anthropic) that defines a unified way to connect AI assistants (LLMs) with external data sources and tools. Think of MCP as a USB-C port for AI applications – a universal interface that allows any AI assistant to plug into any compatible data source or service. By standardizing how context is provided to AI models, MCP breaks down data silos and enables seamless, context-rich interactions across diverse systems.

In practical terms, MCP enhances an AI assistant’s capabilities by giving it controlled access to up-to-date information and services beyond its built-in knowledge. Instead of operating with a fixed prompt or static training data, an MCP-enabled assistant can fetch real-time data, use private knowledge bases, or perform actions on external tools. This helps overcome limitations like the model’s knowledge cutoff and fixed context window. Simply “stuffing” all relevant text into an LLM’s prompt can hit context length limits, slow responses, and drive up cost. MCP’s on-demand retrieval of pertinent information keeps the AI’s context focused and fresh, allowing it to incorporate current data and update or modify external information when permitted.

Another way MCP improves AI integration is by unifying the development pattern. Before MCP, connecting an AI to external data often meant using bespoke integrations or framework-specific plugins. This fragmented approach forced developers to re-implement the same tool multiple times for different AI systems. MCP eliminates this redundancy by providing one standardized protocol. An MCP-compliant server (tool integration) can work with any MCP-compliant client (AI application). In short, MCP lets you “write once, use anywhere” when adding new data sources or capabilities to AI assistants. It brings consistent discovery and usage of tools and improved security. All these benefits make MCP a powerful foundation for building more capable and extensible AI assistant applications.
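To make the server side of this pattern concrete, here is a minimal sketch that assumes the official Python MCP SDK (the `mcp` package) and its FastMCP helper; the server name and the `search_docs` tool are invented purely for illustration. Any MCP-compliant client can discover and call such a tool once the server is running.

```python
# Minimal MCP server sketch, assuming the Python MCP SDK's FastMCP helper.
# The tool below is a placeholder invented for illustration.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-search")          # server name advertised to clients

@mcp.tool()
def search_docs(query: str) -> str:
    """Return the documentation snippet most relevant to the query."""
    # Placeholder logic; a real server would query a knowledge base here.
    return f"No results found for: {query}"

if __name__ == "__main__":
    mcp.run()                         # serve over stdio by default
```

Because the tool is exposed through the standard protocol rather than a framework-specific plugin, the same server works unchanged with any MCP-compliant assistant, which is the “write once, use anywhere” benefit described above.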

ABSTRACT. Seismic waves carry rich information about earthquake sources and the Earth’s medium. However, the process of extracting earthquake source parameters from seismic waves using traditional methods is complex and time consuming. In this study, we present a deep‐learning‐based method for automatic determination of earthquake source parameters. Considering the principle of calculating source parameters, the input of the deep neural network (SourceNet) includes not only the seismic waveform, but also the amplitude, epicenter distance, and station information. The utilization of multimodal data significantly improves the accuracy of determining earthquake source parameters. The test results using the real seismic data in the Sichuan–Yunnan region show that the earthquake source parameters obtained by SourceNet are in good agreement with the manual results and have higher computational efficiency. We apply the trained SourceNet to the seismic activities in the Changning area and further verify the reliability of the method by comparing our estimates of stress drops with those reported in previous studies of this area. The average time for SourceNet to calculate the source parameters of an earthquake is less than 0.1 s, which can be used for real‐time automatic determination of source parameters.
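The abstract describes a multimodal input design. The following is a schematic sketch, not the authors’ SourceNet: one branch encodes the seismic waveform, another encodes scalar metadata (amplitude, epicentral distance, station information), and a shared head regresses source parameters. All layer sizes and the two-parameter output are assumptions made for illustration.

```python
# Schematic sketch of a multimodal source-parameter network (not SourceNet itself).
import torch
import torch.nn as nn

class MultimodalSourceNet(nn.Module):
    def __init__(self, n_meta: int = 4, n_params: int = 2):
        super().__init__()
        self.wave_branch = nn.Sequential(           # 1-D CNN over the waveform
            nn.Conv1d(3, 16, kernel_size=7, stride=2), nn.ReLU(),
            nn.Conv1d(16, 32, kernel_size=7, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool1d(1), nn.Flatten(),
        )
        self.meta_branch = nn.Sequential(            # MLP over scalar metadata
            nn.Linear(n_meta, 32), nn.ReLU(),
        )
        self.head = nn.Linear(32 + 32, n_params)     # fused regression head

    def forward(self, waveform, metadata):
        # waveform: (batch, 3, samples)  three-component seismic record
        # metadata: (batch, n_meta)      amplitude, distance, station features
        features = torch.cat(
            [self.wave_branch(waveform), self.meta_branch(metadata)], dim=1
        )
        return self.head(features)

model = MultimodalSourceNet()
pred = model(torch.randn(8, 3, 3000), torch.randn(8, 4))   # (8, 2) predictions
```

Fusing the waveform features with the scalar inputs is what the abstract credits for the improved accuracy relative to waveform-only models.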

The world of robotics is undergoing a significant transformation, driven by rapid advancements in physical AI. This evolution is accelerating the time to market for new robotic solutions, enhancing confidence in their safety capabilities, and helping power physical AI in factories and warehouses.

Announced at GTC, Newton is an open-source, extensible physics engine developed by NVIDIA, Google DeepMind, and Disney Research to advance robot learning and development.

NVIDIA Cosmos launched as a world foundation model (WFM) platform under an open model license to accelerate the development of physical AI for autonomous machines such as autonomous vehicles and robots.