It’s obvious when a dog has been poorly trained. It doesn’t respond properly to commands. It pushes boundaries and behaves unpredictably. The same is true with a poorly trained artificial intelligence (AI) model. Only with AI, it’s not always easy to identify what went wrong with the training.
Research scientists globally are working with a variety of AI models that have been trained on experimental and theoretical data. The goal: to predict a material's properties before investing the time and expense to create and test it. They are using AI to design better medicines and industrial chemicals in a fraction of the time it takes for experimental trial and error.
But how can they trust the answers that AI models provide? It’s not just an academic question. Millions of investment dollars can ride on whether AI model predictions are reliable.
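One common way to make such predictions more trustworthy is to report not just a number but a measure of the model's confidence. The sketch below is a hypothetical illustration of that idea, not the specific models described here: it trains a random-forest regressor on synthetic stand-in data and uses the spread across the ensemble's trees as a rough uncertainty estimate.

```python
# Hypothetical sketch: attaching an uncertainty estimate to a property prediction.
# The dataset and target here are synthetic stand-ins, not the article's models.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(0)

# Synthetic stand-in for a materials dataset: composition features -> a property.
X = rng.uniform(0.0, 1.0, size=(500, 8))
y = X @ rng.uniform(-2.0, 2.0, size=8) + rng.normal(0.0, 0.1, size=500)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)

# Spread across the ensemble's trees is a rough proxy for predictive uncertainty:
# a candidate with high spread deserves experimental verification before
# investment dollars ride on the prediction.
candidate = rng.uniform(0.0, 1.0, size=(1, 8))
per_tree = np.array([tree.predict(candidate)[0] for tree in model.estimators_])
print(f"predicted property: {per_tree.mean():.3f} +/- {per_tree.std():.3f}")
```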
New research shows that the adult brain can generate new neurons that integrate into key motor circuits. The findings demonstrate that stimulating natural brain processes may help repair damaged neural networks in Huntington’s and other diseases.
“Our research shows that we can encourage the brain’s own cells to grow new neurons that join in naturally with the circuits controlling movement,” said a senior author of the study, which appears in the journal Cell Reports. “This discovery offers a potential new way to restore brain function and slow the progression of these diseases.”
It was long believed that the adult brain could not generate new neurons. However, it is now understood that niches in the brain contain reservoirs of progenitor cells capable of producing new neurons. While these cells actively produce neurons during early development, they switch to producing support cells called glia shortly after birth. One of the areas of the brain where these cells congregate is the ventricular zone, which is adjacent to the striatum, a region of the brain devastated by Huntington’s disease.
Human cyborgs are individuals who integrate advanced technology into their bodies, enhancing their physical or cognitive abilities. This fusion of man and machine blurs the line between science fiction and reality, raising questions about the future of humanity, ethics, and the limits of human potential. From bionic limbs to brain-computer interfaces, cyborg technology is rapidly evolving, pushing us closer to a world where humans and machines become one.
ChatGPT and similar tools often amaze us with the accuracy of their answers, but unfortunately, they also repeatedly give us cause for doubt. The main issue with powerful artificial intelligence (AI) response engines is that they deliver perfect answers and obvious nonsense with the same ease. One of the major challenges lies in how the large language models (LLMs) underlying AI deal with uncertainty.
Until now, it has been very difficult to assess whether LLMs designed for text processing and generation base their responses on a solid foundation of data or whether they are operating on uncertain ground.
Researchers at the Institute for Machine Learning in the Department of Computer Science at ETH Zurich have now developed a method that can be used to specifically reduce the uncertainty of AI. The work is published on the arXiv preprint server.
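The article does not spell out the ETH Zurich method, but a widely used, generic way to surface LLM uncertainty is to sample several answers at nonzero temperature and treat disagreement as a warning sign. The sketch below illustrates only that generic idea; the `sample_answers` function is a hypothetical stub standing in for repeated calls to a chat model.

```python
# Generic sketch of sampling-based uncertainty for an LLM's answers.
# This is NOT the ETH Zurich method; it illustrates one common approach:
# sample several answers and measure how much they agree.
from collections import Counter
import math

def sample_answers(prompt: str, n: int = 10) -> list[str]:
    # Hypothetical stub standing in for n calls to a model at temperature > 0.
    return ["Paris", "Paris", "Paris", "Lyon", "Paris", "Paris",
            "Paris", "Paris", "Marseille", "Paris"][:n]

def answer_entropy(answers: list[str]) -> float:
    # Shannon entropy (in bits) of the empirical answer distribution.
    counts = Counter(a.strip().lower() for a in answers)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total) for c in counts.values())

answers = sample_answers("What is the capital of France?")
print(f"entropy = {answer_entropy(answers):.2f} bits")
# Near 0 bits -> the model answers consistently (solid ground);
# high entropy -> the model is operating on uncertain ground.
```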
This Deep Dive AI podcast discusses my book The Physics of Time: D-Theory of Time & Temporal Mechanics, an insightful exploration into one of the most profound mysteries of existence: the nature of time. As part of the Science and Philosophy of Information series, this book presents a radical reinterpretation of time grounded in modern physics and digital philosophy. It questions whether time is a fundamental aspect of reality or an emergent property of consciousness and information processing. Drawing on quantum physics, cosmology, and consciousness studies, this work invites readers (and listeners) to reimagine time not as a linear, absolute entity, but as a dynamic, editable dimension intertwined with the fabric of reality itself. It challenges traditional views, blending scientific inquiry with metaphysical insight, and is aimed at both the curious mind and the philosophical seeker.
In this episode, we dive deep into The Physics of Time: D-Theory of Time & Temporal Mechanics by futurist-philosopher Alex M. Vikoulov. Explore the profound questions at the intersection of consciousness, quantum and digital physics, and the true nature of time. Is time fundamental or emergent? Can we travel through it? What is Digital Presentism?
The book introduces the D-Theory of Time, or Digital Presentism, which suggests that all moments exist as discrete, informational states, and that our perception of time’s flow is a mental construct. Vikoulov explores theoretical models of time travel, the feasibility of manipulating time, and the concept of the Temporal Singularity, a proposed point where temporal mechanics may reach a transformative threshold.
Artificial intelligence (AI) shows tremendous promise for analyzing vast medical imaging datasets and identifying patterns that may be missed by human observers. AI-assisted interpretation of brain scans may help improve care for children with brain tumors called gliomas, which are typically treatable but vary in risk of recurrence.
Investigators from Mass General Brigham and collaborators at Boston Children’s Hospital and Dana-Farber/Boston Children’s Cancer and Blood Disorders Center trained deep learning algorithms to analyze sequential, post-treatment brain scans and flag patients at risk of cancer recurrence.
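The excerpt says the algorithms analyze sequential post-treatment scans, but not how they are wired together. The sketch below is one plausible shape for such a model, not the investigators' actual architecture: per-scan image features (assumed precomputed by a CNN) are pooled over time by a recurrent layer into a single recurrence-risk score.

```python
# Hedged sketch: scoring recurrence risk from a sequence of follow-up scans.
# Illustrative architecture only; the study's actual model is not described here.
import torch
import torch.nn as nn

class RecurrenceRiskModel(nn.Module):
    def __init__(self, feat_dim: int = 128, hidden: int = 64):
        super().__init__()
        # In practice, feat_dim features per scan would come from a CNN over
        # each MRI volume; here we assume they are precomputed.
        self.gru = nn.GRU(feat_dim, hidden, batch_first=True)
        self.head = nn.Linear(hidden, 1)

    def forward(self, scans: torch.Tensor) -> torch.Tensor:
        # scans: (batch, num_followup_scans, feat_dim), in chronological order.
        _, last_hidden = self.gru(scans)
        # Final hidden state summarizes the whole scan history.
        return torch.sigmoid(self.head(last_hidden[-1]))  # recurrence probability

model = RecurrenceRiskModel()
followups = torch.randn(2, 4, 128)  # 2 patients, 4 sequential scans each
print(model(followups))             # two risk scores in (0, 1)
```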
In a new Nature Communications study, researchers have developed an in-memory ferroelectric differentiator capable of performing calculations directly in memory, without requiring a separate processor.
The proposed differentiator promises energy efficiency, especially for edge devices like smartphones, autonomous vehicles, and security cameras.
Traditional approaches to tasks like image processing and motion detection involve multi-step, energy-intensive processes: data is first recorded, then transmitted to a memory unit, and finally passed to a microcontroller unit that performs the differential operations.
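To make concrete what that differential operation is, here is a minimal sketch of the conventional, processor-side computation: frame-to-frame differencing for motion detection. In the traditional pipeline each frame travels sensor -> memory -> microcontroller before this subtraction can run; the ferroelectric device instead performs the equivalent operation inside the memory array itself. The frame sizes and threshold below are illustrative assumptions.

```python
# Minimal sketch of the processor-side version of what the in-memory
# differentiator computes: the difference between consecutive frames.
import numpy as np

rng = np.random.default_rng(1)
prev_frame = rng.integers(0, 256, size=(64, 64)).astype(np.int16)
next_frame = prev_frame.copy()
next_frame[20:30, 20:30] += 40  # a small patch "moves" (changes brightness)

diff = np.abs(next_frame - prev_frame)  # the differential operation itself
motion_mask = diff > 25                 # threshold out sensor noise
print(f"motion pixels flagged: {motion_mask.sum()}")  # 100 changed pixels
```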
Building GPT-4 took a lot of people. Now, OpenAI says it could rebuild it with as few as five people, all because of what it learned from its latest model, GPT-4.5.