
In this episode, Peter answers the hardest questions about AI, Longevity, and our future at an event in El Salvador (Padres y Hijos).

Recorded in February 2025.
Views are my own thoughts; not Financial, Medical, or Legal Advice.

Chapters.

00:00 — Navigating Confusion in Leadership and Purpose.
02:00 — The Evolution of Work and Purpose.
03:50 — AI’s Role in Information Credibility.
07:17 — Sustainability and Technology’s Impact on Nature.
09:26 — Building a Future with AI and Longevity.
11:40 — The Economics of Longevity and Accessibility.
15:15 — Reimagining Education for the Future.
19:23 — Overcoming Human Obstacles to Progress.

I send weekly emails with the latest insights and trends on today’s and tomorrow’s exponential technologies. Stay ahead of the curve, and sign up now: https://www.diamandis.com/subscribe.


Imagine having the computational power of a full-fledged data center sitting on your desk for just $3,000. It might sound too good to be true, but Nvidia’s Project DIGITS is making it a reality. Designed in collaboration with MediaTek, this compact machine is pushing the boundaries of what’s possible with AI development and high-performance computing, offering impressive power in a surprisingly small form factor.

Silicon Valley’s earliest stage companies are getting a major boost from artificial intelligence. Startup accelerator Y Combinator — known for backing Airbnb, Dropbox and Stripe — this week held its annual demo day in San Francisco, where founders pitched their startups to an auditorium of potential venture capital investors.

“It’s not just the number one or two companies; the whole batch is growing 10% week on week,” said Tan, who is also a Y Combinator alum. App developers can now offload or automate more repetitive tasks, and they can generate new code using large language models. Tan called it “vibe coding,” a term for letting models take the wheel and generate software. In some cases, AI can code entire apps. The ability of AI to shoulder an otherwise heavy workload has allowed these companies to build with fewer people. For about a quarter of the current YC startups, 95% of their code was written by AI, Tan said.

“You don’t need a team of 50 or 100 engineers,” said Tan, adding that companies are reaching as much as $10 million in revenue with teams of fewer than 10 people. [https://open.substack.com/pub/remunerationlabs/p/y-combinato…Share=true](https://open.substack.com/pub/remunerationlabs/p/y-combinato…Share=true)


About 80% of the YC companies that presented this week were AI focused, with a handful of robotics and semiconductor startups.
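To make “vibe coding” concrete, here is a minimal sketch of that workflow, using the OpenAI Python client purely as one example of a code-generating model; the prompt, model choice, and file handling are illustrative assumptions, not a workflow any YC company has published.

```python
# Minimal "vibe coding" sketch: ask a large language model to write code,
# then save the result for human review. The model name, prompt, and output
# file are illustrative assumptions, not a documented YC workflow.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

task = "Write a Python function that deduplicates a list while preserving order."

response = client.chat.completions.create(
    model="gpt-4o",  # any capable code model could stand in here
    messages=[
        {"role": "system", "content": "You are a careful senior engineer. Return only code."},
        {"role": "user", "content": task},
    ],
)

generated_code = response.choices[0].message.content
with open("generated_dedupe.py", "w") as f:
    f.write(generated_code)  # a human (or a test suite) still reviews before merging
```

The point of the pattern is that the human shifts from writing code to specifying tasks and reviewing output, which is how small teams cover work that once needed many engineers.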

Over the past decades, roboticists have introduced a wide range of systems with distinct body structures and varying capabilities. As the number of developed robots continuously grows, being able to easily learn about these many systems, their unique characteristics, differences and performance on specific tasks could prove highly valuable.

Researchers at the Technical University of Munich (TUM) recently created the “Tree of Robots,” a new encyclopedia that could make learning about existing robots and comparing them significantly easier. Their robot encyclopedia, introduced in a paper published in Nature Machine Intelligence, categorizes robots based on their performance fitness on various tasks.

“The aspiration for robots that can understand their environment as we humans do, and execute tasks independently, has existed for ages,” Robin Jeanne Kirschner, first author of the paper, told Tech Xplore.
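As a rough illustration of the structure such an encyclopedia implies (the robots, tasks, and scores below are invented for this sketch, not taken from the TUM dataset), per-task fitness scores make different robot designs directly comparable:

```python
# Illustrative sketch of a "Tree of Robots"-style encyclopedia: each robot is
# described by per-task fitness scores, enabling lookup and comparison.
# All names and numbers below are made up for illustration only.
from dataclasses import dataclass, field

@dataclass
class RobotEntry:
    name: str
    fitness: dict[str, float] = field(default_factory=dict)  # task -> score in [0, 1]

encyclopedia = [
    RobotEntry("arm_A", {"pick_and_place": 0.92, "surface_polishing": 0.40}),
    RobotEntry("arm_B", {"pick_and_place": 0.75, "surface_polishing": 0.88}),
]

def best_for(task: str, entries: list[RobotEntry]) -> RobotEntry:
    """Return the entry with the highest fitness score on a given task."""
    return max(entries, key=lambda e: e.fitness.get(task, 0.0))

print(best_for("surface_polishing", encyclopedia).name)  # -> arm_B
```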

Chinese internet search giant Baidu released a new artificial intelligence reasoning model Sunday and made its AI chatbot services free to consumers as ferocious competition grips the sector.

Technology companies in China have been scrambling to release improved AI platforms since start-up DeepSeek shocked its rivals with its highly cost-efficient model in January.

In a post on WeChat, Baidu announced the launch of its latest X1 reasoning model, which the company claims performs similarly to DeepSeek’s but at lower cost, along with a new foundation model, Ernie 4.5.

Artificial intelligence can transform medicine in a myriad of ways, including its promise to act as a trusted diagnostic aide to busy clinicians.

Over the past two years, proprietary AI models, also known as closed-source models, have excelled at solving hard-to-crack medical cases that require complex clinical reasoning. Notably, these closed-source AI models have outperformed open-source ones, so-called because their source code is publicly available and can be tweaked and modified by anyone.

Has open-source AI caught up?

In this video, Dr. Ardavan (Ahmad) Borzou discusses a rising technology for constructing bio-computers for AI tasks, namely Brainoware, which is made of brain organoids interfaced with electrode arrays.

Need help with your data science or math modeling project?
https://compu-flair.com/solution/

🚀 Join the CompuFlair Community! 🚀
📈 Sign up on our website to access exclusive Data Science Roadmap pages — a step-by-step guide to mastering the essential skills for a successful career.
💪As a member, you’ll receive emails on expert-engineered ChatGPT prompts to boost your data science tasks, be notified of our private problem-solving sessions, and get early access to news and updates.
👉 https://compu-flair.com/user/register.

Comprehensive Python Checklist (machine learning and more advanced libraries will be covered on a different page):
https://compu-flair.com/blogs/program…

Chapters.

00:00 — Introduction.
02:16 — Von Neumann Bottleneck.
03:54 — What is brain organoid.
05:09 — Brainoware: reservoir computing for AI.
06:29 — Computing properties of Brainoware: Nonlinearity & Short-Memory.
09:27 — Speech recognition by Brainoware.
12:25 — Predicting chaotic motion by Brainoware.
13:39 — Summary of Brainoware research.
14:35 — Can brain organoids surpass the human brain?
15:51 — Will humans evolve to a body-less stage in their evolution?
16:30 — What is the mathematical model of Brainoware?
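For readers wondering about the mathematical model behind the last chapter above: Brainoware follows the reservoir computing paradigm, in which a fixed, high-dimensional dynamical system transforms the input and only a simple linear readout is trained. Below is a minimal echo state network sketch of that idea; the sizes, signals, and toy prediction task are arbitrary assumptions, and in Brainoware the simulated reservoir is replaced by a living organoid driven and read out through electrode arrays.

```python
# Minimal echo state network: a fixed random reservoir plus a trained linear
# readout. Brainoware uses the same scheme, but the tanh update below is
# replaced by a brain organoid stimulated and read via electrode arrays.
# Dimensions, inputs, and targets are arbitrary illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_res, n_steps = 1, 200, 500

W_in = rng.uniform(-0.5, 0.5, (n_res, n_in))      # fixed input weights
W = rng.normal(0, 1, (n_res, n_res))              # fixed recurrent weights
W *= 0.9 / np.max(np.abs(np.linalg.eigvals(W)))   # spectral radius < 1 for stability

u = np.sin(np.linspace(0, 20 * np.pi, n_steps))[:, None]  # toy input signal
target = np.roll(u[:, 0], -1)                              # task: predict next value

# Drive the reservoir: x(t+1) = tanh(W x(t) + W_in u(t))
states = np.zeros((n_steps, n_res))
x = np.zeros(n_res)
for t in range(n_steps):
    x = np.tanh(W @ x + W_in @ u[t])
    states[t] = x

# Train only the linear readout, via ridge regression
ridge = 1e-6
W_out = np.linalg.solve(states.T @ states + ridge * np.eye(n_res), states.T @ target)
pred = states @ W_out
print("training MSE:", np.mean((pred - target) ** 2))
```

The nonlinearity and short-term memory highlighted in the video correspond to the tanh update and the decaying influence of past inputs in the recurrent state.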

For over a century, galvanic vestibular stimulation (GVS) has been used as a way to stimulate the inner ear nerves by passing a small amount of current.

We use GVS in a two-player, escape-the-room-style VR game set in a dark virtual world. The VR player is remote-controlled like a robot by a non-VR player, who uses GVS to alter the VR player’s walking trajectory. We also use GVS to induce the physical sensations of virtual motion and to mitigate motion sickness in VR.

Brain hacking has been a futurist fascination for decades. Turns out, we may be able to make it a reality as research explores the impact of GVS on everything from tactile sensation to memory.
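As a rough sketch of how the two-player steering described above could be wired up (the 2 mA ceiling, the linear mapping, and the function itself are illustrative assumptions; real GVS requires medical-grade, safety-reviewed hardware and should never be improvised), a controller maps a desired turn into a small bipolar current across electrodes placed behind the ears:

```python
# Illustrative GVS steering sketch: map a desired turn direction to a small
# bipolar current across the mastoid electrodes. The 2 mA ceiling and linear
# mapping are assumptions for illustration; the sign convention is arbitrary,
# since actual sway direction depends on electrode polarity and placement.

MAX_CURRENT_MA = 2.0  # GVS studies typically stay in the low single-digit mA range

def gvs_command(turn: float) -> dict:
    """turn in [-1.0, 1.0]: negative steers left, positive steers right."""
    turn = max(-1.0, min(1.0, turn))       # clamp to the safe command range
    current = turn * MAX_CURRENT_MA        # linear map from intent to current
    return {"left_electrode_ma": -current, "right_electrode_ma": current}

# The non-VR "operator" player steers the VR player gently to the right:
print(gvs_command(0.5))  # {'left_electrode_ma': -1.0, 'right_electrode_ma': 1.0}
```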

Misha graduated in June 2018 from the MIT Media Lab where she worked in the Fluid Interfaces group with Prof Pattie Maes. Misha works in the area of human-computer interaction (HCI), specifically related to virtual, augmented and mixed reality. The goal of her work is to create systems that use the entire body for input and output and automatically adapt to each user’s unique state and context. Misha calls her concept perceptual engineering, i.e., immersive systems that alter the user’s perception (or more specifically the input signals to their perception) and influence or manipulate it in subtle ways. For example, they modify a user’s sense of balance or orientation, manipulate their visual attention and more, all without the user’s explicit awareness, and in order to assist or guide their interactive experience in an effortless way.

The systems Misha builds use the entire body for input and output, i.e., they can use movement, like walking, or a physiological signal, like breathing, as input, and can output signals that actuate the user’s vestibular system with electrical pulses, causing the individual to move or turn involuntarily. HCI up to now has relied upon deliberate, intentional usage, both for input (e.g., touch, voice, typing) and for output (interpreting what the system tells you, shows you, etc.). In contrast, Misha develops techniques and builds systems that do not require this deliberate, intentional user interface but are able to use the body as the interface for more implicit and natural interactions.

Misha’s perceptual engineering approach has been shown to increase the user’s sense of presence in VR/MR, provide novel ways to communicate between the user and the digital system using proprioception and other sensory modalities, and serve as a platform to question the boundaries of our sense of agency and trust.