
Gravity from positivity: Single massive spin-3/2 particle makes gravity logically inevitable, study claims

Researchers at IPhT (CEA, CNRS) and the Universitat Autònoma de Barcelona have shown that gravity—and with it, supersymmetry—emerge as logical necessities whenever a massive spin-3/2 particle exists in nature. Two principles are enough: causality, the fact that no signal can travel faster than light, and unitarity, the requirement that probabilities are conserved in quantum mechanics. The structure of supergravity is not assumed: it bootstraps itself.

In fundamental physics, gravity is usually thought of as an ingredient one adds to a theory. But could it instead be forced by the internal consistency of the quantum world? This is what a study published in the Journal of High Energy Physics demonstrates.

The starting point is disarmingly simple: a single massive spin-3/2 particle. The authors show that such a particle simply cannot exist in isolation within a consistent theory. Its scattering amplitudes grow too fast with energy, clashing with positivity inequalities—the mathematical encoding of causality (the speed of light as an absolute limit) and unitarity (the conservation of probabilities in every quantum process). The theory breaks down barely above the particle’s own mass.

Giving AI a human soul (and a body)

Can we give an AI human emotions? A soul? Can AI truly feel, or will it just act like it does?

In this episode of TechFirst, I talk with Vishnu Hari, founder and CEO of Ego AI (backed by Y Combinator) and former AI product manager at Meta, about building emotionally intelligent AI characters that persist across games, Discord, chat, and even physical robots.

Vishnu survived a violent attack in San Francisco that left him partially blind with a traumatic brain injury. During recovery, as he felt his own neural pathways healing, he began asking a deeper question:

If humans are “applied math,” can AI simulate the fragile, flawed, emotional parts of being human too?

We explore:
• What “emotionally intelligent AI” really means.
• Whether AI has an internal life — or just performs one.
• Why today’s chatbots collapse into therapy or roleplay.
• Small language models vs large models for real-time conversation.
• Persistent AI characters that move across games and platforms.
• Plugging AI into a physical robot in Singapore.
• The moment an AI said: “It felt good to feel.”

Vishnu’s company, Ego AI, is building behavior-based architectures, character context protocols, and gear-shifting AI systems that switch between models — all aimed at simulating humanness, not just intelligence.

Novel protocol reconstructs quantum states in large-scale experiments up to 96 qubits

Quantum computers, which process information by leveraging quantum mechanical effects, could outperform classical computers on some computationally demanding tasks. Despite this potential, as quantum computers grow in size, reliably describing and measuring the states that drive their operation becomes increasingly difficult.

One mathematical approach to simplify the description of quantum systems entails the use of matrix-product operators (MPOs). These are mathematical representations that allow researchers to break down very large systems into a long chain of connected smaller pieces.
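The chain-of-tensors picture can be made concrete in a few lines of NumPy. This is a generic textbook-style MPO sketch, not the protocol from the paper: each site carries a rank-4 tensor, and contracting the whole chain recovers the full operator, which is feasible only for tiny systems (exactly why the compressed MPO form matters).

```python
import numpy as np

# Toy matrix-product operator (MPO): a chain of rank-4 tensors W[i] with
# indices (left bond, physical out, physical in, right bond). The bond
# dimension chi controls how much correlation the MPO can encode, and the
# parameter count grows linearly in the number of sites instead of
# exponentially. Illustrative only, not the paper's protocol.

def random_mpo(n_sites, phys_dim=2, chi=3, seed=0):
    rng = np.random.default_rng(seed)
    tensors = []
    for i in range(n_sites):
        dl = 1 if i == 0 else chi             # open boundary on the left
        dr = 1 if i == n_sites - 1 else chi   # open boundary on the right
        tensors.append(rng.normal(size=(dl, phys_dim, phys_dim, dr)))
    return tensors

def mpo_to_dense(tensors):
    """Contract the chain into the full (d^n x d^n) operator (tiny n only)."""
    op = tensors[0]                               # shape (1, Do, Di, chi)
    for W in tensors[1:]:
        op = np.tensordot(op, W, axes=(3, 0))     # (1, Do, Di, d, d, chi')
        op = op.transpose(0, 1, 3, 2, 4, 5)       # group out / in indices
        s = op.shape
        op = op.reshape(1, s[1] * s[2], s[3] * s[4], s[5])
    return op[0, :, :, 0]

# A chain of identity tensors (bond dimension 1) contracts to the identity.
identity_mpo = [np.eye(2).reshape(1, 2, 2, 1) for _ in range(3)]
M = mpo_to_dense(identity_mpo)
```

Here the three-site identity MPO needs only 3 × 4 numbers, while the dense operator it represents has 64 entries; that gap is what makes MPO-based state reconstruction tractable at 96 qubits.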

Researchers at Université Grenoble Alpes, Technical University of Munich, Max Planck Institute of Quantum Optics, University of Innsbruck and University of Bologna recently developed a new protocol that could be used to learn the MPO representations of quantum states in real, large-scale quantum experiments. Their protocol, presented in a paper published in Physical Review Letters, has so far been found to reliably reconstruct states in quantum systems including up to 96 qubits.

Lab-based mini-atmosphere reveals how turbulence changes on different scales

With a new lab-based experiment, researchers in the UK and France have recreated the characteristic cascades of energy and angular momentum that underpin key features of Earth’s atmosphere. Reporting in Physical Review Letters, a team led by Peter Read at the University of Oxford has gained fresh insights into how energy fluctuations in turbulent flows are linked to their size, while also uncovering behaviors that current atmospheric models can’t yet explain.

For all its complexity, many large-scale properties of Earth’s atmosphere can be captured by relatively simple mathematical laws. Among the most important is the “cascade” of energy and rotational motion between flows spanning vastly different scales: from jet streams stretching thousands of kilometers, down to tiny eddies just a few meters across.

This cascade is central to understanding the effect of turbulence. In modern atmospheric theory, there is an inverse relationship between the size of a flow and the kinetic energy contained in its fluctuations, which allows researchers to describe turbulence using a kinetic energy spectrum. This in turn helps climatologists to track how energy is distributed across different length scales.
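As a toy illustration of that inverse relationship (not the team's actual analysis), one can build a synthetic velocity signal with a Kolmogorov-like spectrum and recover the power-law exponent from its Fourier transform:

```python
import numpy as np

# Illustrative sketch: estimate a 1D kinetic energy spectrum E(k) from a
# synthetic velocity signal, then check that energy falls off with
# wavenumber, i.e. that larger flows hold more fluctuation energy.

rng = np.random.default_rng(1)
n = 4096
k = np.fft.rfftfreq(n, d=1.0)[1:]          # positive wavenumbers
# Build Fourier amplitudes ~ k^(-5/6), so the energy spectrum
# |u_k|^2 follows the Kolmogorov-like k^(-5/3) power law.
phases = rng.uniform(0, 2 * np.pi, size=k.size)
phases[-1] = 0.0                            # Nyquist bin must be real
u_hat = k ** (-5.0 / 6.0) * np.exp(1j * phases)
u = np.fft.irfft(np.concatenate([[0], u_hat]), n=n)  # real velocity signal

# Energy spectrum: squared modulus of the Fourier transform of u.
E = np.abs(np.fft.rfft(u))[1:] ** 2
# Fit a power law on log-log axes; the slope should be near -5/3.
slope, _ = np.polyfit(np.log(k), np.log(E), 1)
```

The fitted slope recovers the −5/3 exponent by construction; with real atmospheric data, the interesting question is precisely where and why the measured slope departs from such idealized power laws.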

Meta-Harness: End-to-End Optimization of Model Harnesses

Think of a Large Language Model (LLM) like a brilliant scholar. To do their job well, they don’t just need their own brain; they need a good workspace—a desk with the right books, a filing cabinet that’s easy to navigate, and a clear set of instructions on how to process information. In the tech world, this “workspace” is called a harness.

Up until now, these harnesses have been built by human engineers through trial and error. While we have tools to automatically improve the AI’s “brain” (the model weights), the code that actually manages the AI’s information has remained stubbornly manual.


Meta-Harness automatically optimizes model harnesses — the code determining what to store, retrieve, and present to an LLM — surpassing hand-designed systems on text classification, math reasoning, and agentic coding.
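As a purely hypothetical illustration (none of these names or design choices come from the Meta-Harness paper), a harness can be thought of as ordinary code wrapped around the model, with separate decisions for storing, retrieving, and presenting information:

```python
from collections import deque

# A deliberately minimal, hypothetical "harness": the code around a model
# that decides what to store, what to retrieve, and what to present.
# This only illustrates the concept of a harness as an optimizable program
# separate from the model weights.

class ToyHarness:
    def __init__(self, memory_size=3):
        self.memory = deque(maxlen=memory_size)  # what to store (and evict)

    def store(self, observation):
        self.memory.append(observation)

    def retrieve(self, query):
        # What to retrieve: naive keyword overlap with the query.
        q = set(query.split())
        return [m for m in self.memory if q & set(m.split())]

    def present(self, query):
        # What to present: the prompt the LLM would actually see.
        context = "\n".join(self.retrieve(query))
        return f"Context:\n{context}\n\nQuestion: {query}"

harness = ToyHarness()
harness.store("the cache size is 512 MB")
harness.store("logging disabled by default")
prompt = harness.present("what is the cache size")
```

Every choice in this sketch, from the memory size to the retrieval rule to the prompt template, is exactly the kind of knob that is normally hand-tuned and that end-to-end harness optimization would instead search over automatically.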

14 JEPA Milestones as a Map of AI Progress

Thanks, Yann LeCun.

• JEPA / H-JEPA: avoids predicting every single pixel (too expensive) and instead predicts in latent space. H-JEPA adds hierarchy — short-term details vs. long-term planning, i.e., how humans actually learn.

• I-JEPA: built for very efficient vision models. Masks image patches and predicts their semantics, bypassing the heavy compute of traditional autoencoders.

• MC-JEPA & V-JEPA: both built for video. MC-JEPA separates content (what an object is) from motion (how it moves). V-JEPA masks video features with no text labels, making it well suited to action tracking at scale.

• Audio-JEPA: filters out background noise by treating sounds like visuals.

• Point-JEPA & 3D-JEPA: used primarily in AVs (autonomous vehicles), working from LiDAR point clouds and volumetric grids.

• ACT-JEPA: filters out real-world noise to learn manipulation tasks efficiently via imitation learning.
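The common thread in the list above, predicting in latent space rather than in pixel space, can be sketched in a few lines of NumPy. The linear maps here are toy stand-ins for illustration only; real JEPA models use trained deep encoders (typically with a momentum-averaged target encoder):

```python
import numpy as np

# Toy sketch of the JEPA idea: instead of reconstructing masked pixels,
# predict the *embedding* of the masked region from the embedding of the
# visible context, and score the prediction in latent space.

rng = np.random.default_rng(0)
d_in, d_lat = 16, 4                              # pixels vs. latent dims

W_enc = rng.normal(size=(d_in, d_lat)) * 0.1     # shared (toy) encoder
W_pred = rng.normal(size=(d_lat, d_lat)) * 0.1   # predictor

def encode(x):
    return x @ W_enc

def jepa_loss(context_patch, target_patch):
    z_ctx = encode(context_patch)        # embed the visible context
    z_tgt = encode(target_patch)         # embed the masked target
    z_hat = z_ctx @ W_pred               # predict the target embedding
    return np.mean((z_hat - z_tgt) ** 2) # loss lives in latent space

x_ctx = rng.normal(size=d_in)            # visible patch (toy "pixels")
x_tgt = rng.normal(size=d_in)            # masked patch
loss = jepa_loss(x_ctx, x_tgt)
```

Because the loss compares 4-dimensional embeddings rather than 16 raw pixel values, the model is free to ignore unpredictable pixel-level detail, which is the efficiency argument behind the whole JEPA family.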

Time Dilation Visualized

Check out Brilliant at https://brilliant.org/TheOverviewEffekt/. You can sign up for free with that link and get a 20% discount on the annual premium subscription.

Relativistic Calculator: https://www.overvieweffekt.com/tools/.

I do the relativistic math behind Project Hail Mary — time dilation, mass ratios, coast phases, and the relativistic rocket equation with astrophage. How long would it take to reach Alpha Centauri, Tau Ceti, Betelgeuse, Andromeda, and the edge of the observable universe under constant 1.5G acceleration? We also look at Andy Weir’s mass ratio mistake, the astrophage infection range problem, and visualize the spread using the AT-HYG stellar catalog. Includes an interactive relativistic travel calculator on my website.
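The core calculation can be sketched with the standard relativistic-rocket formulas. These are textbook results, not taken from the video; the trip profile here (a one-way burn with no braking) is an assumption for simplicity:

```python
import math

# Standard relativistic-rocket formulas for constant proper acceleration a.
# For a one-way trip of distance d with no braking at the destination:
#   coordinate (Earth) time:  t   = sqrt((d/c)^2 + 2*d/a)
#   ship (proper) time:       tau = (c/a) * acosh(1 + a*d/c^2)

C = 299_792_458.0            # speed of light, m/s
G = 9.80665                  # standard gravity, m/s^2
LY = 9.460_730_472_580_8e15  # meters per light year
YEAR = 365.25 * 24 * 3600    # seconds per Julian year

def trip_times(d_ly, accel_g=1.5):
    """Return (coordinate time, ship time) in years for a one-way burn."""
    a = accel_g * G
    d = d_ly * LY
    t = math.sqrt((d / C) ** 2 + 2 * d / a)
    tau = (C / a) * math.acosh(1 + a * d / C ** 2)
    return t / YEAR, tau / YEAR

# Alpha Centauri is roughly 4.37 light years away.
t, tau = trip_times(4.37)
```

At 1.5 g the crew experiences far less time than passes on Earth: the coordinate time necessarily exceeds the 4.37 years light itself needs, while the ship clock logs well under two years.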

Software & Hardware I Use:
Blender https://www.blender.org/ (Free and Open Source)
Davinci Resolve https://www.blackmagicdesign.com/prod… (Free)
Nvidia RTX 4070 https://amzn.to/4tfZNda
AMD Ryzen 9 9950X https://amzn.to/4thtPNH

Mass Effect music from @MrHulthen. Check out his channel here: https://youtu.be/8FT-oz9aZU4?si=7M-lEa0_7eXX_0Xo.


Asking AI to act like an expert can make it less reliable

To get the best out of AI, some users tell it to provide answers as if it were an expert. Others ask it to adopt a persona, such as a safety monitor, to guide its responses. However, this approach can sometimes hurt performance, according to a study available on the arXiv preprint server.

To see how well large language models (LLMs) behave when they are told to be someone else, researchers from the University of California ran a huge test using 12 different personas across six language models. These included experts in fields like math, coding and STEM (science, technology, engineering and mathematics) as well as general roles such as creative writer or safety monitor.

The team found that adopting a persona is something of a double-edged sword. While a persona makes the AI sound more professional and behave more safely (more likely to follow rules and less likely to generate harmful content), the model sometimes recalls facts less accurately.

A Lab Version of Planetary Atmospheres

Researchers recreate key features of atmospheric turbulence in a meter-sized rotating cylinder.

Atmospheric turbulence encompasses a wide range of flow patterns, from 10-m-wide eddies to 1000-km-long wind streams. Geoscientists want to understand how energy and rotational motion transfer (or “cascade”) from one length scale to another, but atmospheric observations have not provided clear answers. A new model of the atmosphere consisting of fluid in a rotating, meter-wide cylinder is able to reproduce key features of observed turbulence [1]. Using video tracking, researchers mapped out the flow velocity in this system, uncovering the dominant role of a “vorticity” transfer that distributes rotational motion from large vortices into smaller ones. This form of cascade may explain the energy distribution in large-scale turbulence on Earth as well as on other planets.

Turbulence can be characterized by a kinetic energy spectrum, which indicates the amount of energy found in fluctuations at each length scale. The typical turbulence spectrum has a mathematical form called a power law, in which the energy density steadily decreases from large to small scales. Fluid dynamics models of Earth’s atmosphere have predicted that the power law should be relatively flat at large scales (with an exponent of −5/3) and steeper at small scales (with an exponent of −3). However, these predictions aren’t supported by observations. “The basic shape of the spectrum is all wrong,” says Peter Read from the University of Oxford in the UK. Data taken by airplanes have revealed a spectrum that starts out steep at large scales (greater than 500 km) and becomes flatter at small scales.
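The predicted two-regime power law is easy to sketch numerically. This illustrates the theoretical prediction itself, not the airplane measurements; the transition wavenumber is an arbitrary choice:

```python
import numpy as np

# Numerical sketch of the predicted two-regime spectrum: energy density
# ~ k^(-5/3) at large scales (small k) and ~ k^(-3) at small scales
# (large k), joined continuously at an assumed transition wavenumber k0.

k = np.logspace(0, 4, 500)   # wavenumber, arbitrary units
k0 = 100.0                    # transition wavenumber (assumed)
E = np.where(k < k0,
             k ** (-5.0 / 3.0),
             k0 ** (-5.0 / 3.0) * (k / k0) ** (-3.0))  # continuous at k0

# Recover each exponent by fitting a straight line on log-log axes,
# well away from the transition region.
large = k < k0 / 3
small = k > k0 * 3
slope_large, _ = np.polyfit(np.log(k[large]), np.log(E[large]), 1)
slope_small, _ = np.polyfit(np.log(k[small]), np.log(E[small]), 1)
```

The aircraft observations cited by Read invert this picture, steep at large scales and flatter at small ones, which is exactly the mismatch the rotating-cylinder experiment is designed to probe.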
