
Brain organoids can be trained to solve a goal-directed task

This research is the first rigorous academic demonstration of goal-directed learning in lab-grown brain organoids, laying the foundation for adaptive organoid computation: exploring the capacity of such organoids to learn and solve tasks.

Using organoids derived from mouse stem cells and an electrophysiology system developed by industry partner Maxwell Biosciences, the researchers use electrical stimulation to send information to, and record signals from, the neurons. By using stronger or weaker signals, they communicate to the organoid the angle of a pole in a virtual environment as it falls in one direction or the other. In response, the organoid sends back signals indicating how to apply force to balance the pole, and the researchers apply that force to the virtual pole.

In the pole-balancing experiments, the organoid controls the pole until it drops; one such run is called an episode. Then the pole is reset and a new episode begins. In essence, the organoid plays a video game whose goal is to keep the pole upright for as long as possible.

The researchers track the organoid’s progress in five-episode increments. If the organoid keeps the pole upright longer, on average, over the past five episodes than over the past 20, it has been improving and receives no training signal. If the average balancing time does not improve, it receives a training signal.

Training feedback is given only at the end of an episode, never while the organoid is balancing the pole. A reinforcement learning algorithm, a standard class of AI methods, selects which neurons within the organoid receive the training signal.
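The five-versus-twenty-episode rule described above can be sketched in a few lines. This is a toy illustration only; the function name, the tie-breaking behavior, and the handling of short histories are my assumptions, not details from the paper:

```python
def needs_training_signal(episode_durations):
    """Decide whether to send a training signal after an episode.

    Follows the rule described in the article: compare the mean
    balancing time over the last 5 episodes with the mean over the
    last 20. Improvement -> no signal; no improvement -> signal.
    """
    if len(episode_durations) < 20:
        return False  # not enough history yet (assumption)
    recent_mean = sum(episode_durations[-5:]) / 5
    baseline_mean = sum(episode_durations[-20:]) / 20
    # If the recent average does not beat the longer-run average,
    # the organoid is not improving, so it gets the training signal.
    return recent_mean <= baseline_mean

# Toy usage: steadily improving balancing times never trigger the signal,
# while a flat history does.
improving = list(range(1, 25))   # durations 1, 2, ..., 24 seconds
print(needs_training_signal(improving))   # False
print(needs_training_signal([5.0] * 25))  # True
```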

The results of this study show that the reinforcement learning algorithm can guide brain organoids toward improved performance on the cart-pole task: the organoids learn to balance the pole for longer periods of time.

The researchers adopted rigorous success criteria to ensure they were observing genuine improvement rather than random success, including a threshold for the minimum time an organoid must balance the pole to “win” the game.

‘It seemed to defy the laws of physics’: The everlasting ‘memory crystals’ that could slash data centre emissions

In the face of rising emissions from data centres, researchers are turning to micro-explosions in glass, and using DNA to solve big data’s big problem.

Mathematicians make a breakthrough on 2,000-year-old problem of curves

From the article:

“A Rule for Every Curve”

That’s where the new proof comes in. Its authors present a formula that can be applied to any curve in the mathematical universe, whatever its degree. It doesn’t say precisely how many rational points that curve has, but it gives an upper limit on what that number can be.

Previous formulas of this kind either didn’t apply to all curves or depended on the specific equation used to define them. The new formula is something mathematicians have hoped for since Faltings’s proof, a “uniform” statement that applies to all curves without depending on the coefficients in their equations. “This one statement gives us a broad sweep of understanding,” Mazur says.

It depends on only two things. The first is the degree of the polynomial that defines the curve—the higher the degree is, the weaker the statement becomes. The second thing the formula depends on is called the “Jacobian variety,” a special surface that can be constructed from any curve. Jacobian varieties are interesting in their own right, and the formula offers a tantalizing path for studying them as well.
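Reading the description above, the bound plausibly has the following schematic shape (my paraphrase for orientation only, not the authors' exact statement):

```latex
\[
  \#\, C(\mathbb{Q}) \;\le\; c(d)^{\,1 + \rho}
\]
```

Here \(C(\mathbb{Q})\) is the set of rational points on the curve, \(d\) is the degree of its defining polynomial, \(c(d)\) is a constant that grows with \(d\) (so the bound weakens for higher-degree curves, as the article notes), and \(\rho\) measures the size of the group of rational points on the curve's Jacobian variety. The key feature is uniformity: nothing in the bound depends on the particular coefficients of the curve's equation.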


Since ancient Greece, researchers have tried to isolate special rational points on curves. Now they have the first ever formula that applies uniformly to all curves.

Quantum algorithm beats classical tools on complement sampling tasks

Quantum computers—devices that process information using quantum mechanical effects—have long been expected to outperform classical systems on certain tasks. Over the past few decades, researchers have worked to rigorously demonstrate such advantages, ideally in ways that are provable, verifiable and experimentally realizable.

A team of researchers working at Quantinuum in the United Kingdom and QuSoft in the Netherlands has now developed a quantum algorithm that solves a specific sampling task—known as complement sampling—dramatically more efficiently than any classical algorithm. Their paper, published in Physical Review Letters, establishes a provable and verifiable quantum advantage in sample complexity: the number of samples required to solve a problem.

“We stumbled upon the core result of this work by chance while working on a different project,” Harry Buhrman, co-author of the paper, told Phys.org. “We had a set of items and two quantum states: one formed from half of the items, the other formed from the remaining half. Even though the two states are fundamentally distinct, we showed that a quantum computer may find it hard to tell which one it is given. Surprisingly, however, we then realized that transforming one state into the other is always easy, because a simple operation can swap between them.”
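To make the sample-complexity gap concrete, here is a toy classical baseline for complement sampling (my own illustration, not the paper's construction or Quantinuum's algorithm): given samples drawn from an unknown half-size set S, a classical guesser can do little better than avoid the elements it has already seen, so with few samples over a large universe its guess lands in the complement barely more than half the time.

```python
import random

def classical_complement_guess(samples, N):
    """Toy classical heuristic for complement sampling: given samples
    drawn from an unknown set S of size N/2, guess an element NOT in S
    by avoiding everything seen so far."""
    seen = set(samples)
    candidates = [x for x in range(N) if x not in seen]
    return random.choice(candidates)

# With only a handful of samples over a large universe, most of S is
# still unseen, so the guess is in the complement only about half the time.
N = 1024
S = set(random.sample(range(N), N // 2))
samples = [random.choice(sorted(S)) for _ in range(10)]
guess = classical_complement_guess(samples, N)
print(guess not in S)  # True roughly half the time
```

The quantum algorithm in the paper, by contrast, exploits the easy transformation between the two states that Buhrman describes to produce complement samples from far fewer copies than any classical strategy requires.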

Quantum computers go high-dimensional with a four-state photon gate

The collaboration of TU Wien with research groups in China has resulted in a crucial building block for a new kind of quantum computer: The realization of a novel type of quantum logic gate makes it possible to carry out quantum computations on pairs of photons that are each in four different quantum states, or combinations thereof. The advancement is an important milestone for optical quantum computers. The study has now been published in Nature Photonics.

The basic idea of quantum computers is simple: While a classical computer only works with the values “0” and “1,” quantum physics allows for arbitrary combinations of these states. In a certain sense, a quantum bit (“qubit”) can be in the states 0 and 1 simultaneously. This makes it possible to develop algorithms that can solve some problems much faster than a comparable classical computer.

However, such superpositions can in principle involve more than two states. Depending on what degree of freedom one considers, a quantum system such as a photon may not just have two different settings—two different outcomes of a potential measurement—but many. In this case, one refers to the system as a “qudit” rather than a “qubit.”
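The jump from two settings to four can be seen in a small state-vector sketch (plain NumPy, my own illustration; the amplitudes are chosen arbitrarily):

```python
import numpy as np

# A qubit lives in a 2-dimensional state space; a photonic qudit with
# four settings lives in a 4-dimensional one. Any normalized complex
# 4-vector is a valid single-qudit state.
qudit = np.array([1, 1j, -1, -1j], dtype=complex) / 2.0

# Measurement probabilities for the four possible outcomes:
probs = np.abs(qudit) ** 2
print(probs)        # [0.25 0.25 0.25 0.25]
print(probs.sum())  # 1.0 -- amplitudes must be normalized

# Two such qudits jointly span a 4 x 4 = 16-dimensional space,
# which is what a two-qudit gate like the one reported acts on.
pair = np.kron(qudit, qudit)
print(pair.shape)   # (16,)
```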

GOOD LUCK, HAVE FUN, DON’T DIE — Welcome To The Perfect Prison

Gore Verbinski’s Good Luck, Have Fun, Don’t Die hits like a nasty mirror held up at the worst possible angle. On paper, the setup sounds almost playful: a “Man From the Future” drops into a diner in Los Angeles and has to recruit the exact combination of disgruntled strangers for a one-night mission to stop a rogue AI. But the horror isn’t metal skeletons and laser fire. It’s the idea that the end of humanity doesn’t arrive with an explosion. It arrives with an upgrade. A perfectly tuned stream of algorithmic entertainment that doesn’t merely distract people—it replaces them. A manufactured paradise so frictionless, so gratifying, so chemically rewarding, that the messy, strenuous, inconvenient act of being human starts to feel obsolete.


AI in Pathology Fails Without Pathologists

🧠 AI in pathology cannot succeed without pathologists. As computational pathology advances, clinical expertise remains the critical link between algorithms and real-world impact.

In this discussion, Diana Montezuma, Pathologist and Head of R&D at IMP Diagnostics, explains why pathologist involvement is essential to building AI tools that are usable, clinically relevant, and truly valuable in practice.

Pathologists play a key role in AI development for pathology – providing the expertise needed to bridge data and clinical application. To discuss this role and its importance in the development of computational pathology tools, we connected with Diana Montezuma, Pathologist and Head of the R&D Unit at IMP Diagnostics.

From your perspective, what is the most important contribution that diagnosticians bring to AI and algorithm development?

Pathologists bring essential clinical expertise and practical insight to any computational pathology project. Without their involvement, such initiatives risk becoming disconnected from real-world practice and ultimately failing to deliver meaningful clinical value.

Machine learning algorithm fully reconstructs LHC particle collisions

The CMS Collaboration has shown, for the first time, that machine learning can be used to fully reconstruct particle collisions at the LHC. This new approach can reconstruct collisions more quickly and precisely than traditional methods, helping physicists better understand LHC data. The paper has been submitted to the European Physical Journal C and is currently available on the arXiv preprint server.

Each proton–proton collision at the LHC sprays out a complex pattern of particles that must be carefully reconstructed to allow physicists to study what really happened. For more than a decade, CMS has used a particle-flow (PF) algorithm, which combines information from the experiment’s different detectors, to identify each particle produced in a collision. Although this method works remarkably well, it relies on a long chain of hand-crafted rules designed by physicists.

The new CMS machine-learning-based particle-flow (MLPF) algorithm approaches the task fundamentally differently, replacing much of the rigid hand-crafted logic with a single model trained directly on simulated collisions. Instead of being told how to reconstruct particles, the algorithm learns what particles look like in the detectors, much as humans learn to recognize faces without memorizing explicit rules.
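The learned-reconstruction idea can be sketched with a toy stand-in (a nearest-centroid classifier, not the actual MLPF network; the detector features and particle classes here are invented purely for illustration):

```python
import numpy as np

# Toy sketch: fit parameters from "simulated" events that map detector
# signals to particle-type predictions, instead of hand-crafted rules.
rng = np.random.default_rng(0)

# Invented training data: (tracker signal, calorimeter energy) -> class.
# Class 0 = "charged particle" (leaves a track), class 1 = "photon"
# (energy deposit, no track). Both labels are assumptions for the demo.
X = np.vstack([rng.normal([5.0, 5.0], 1.0, (200, 2)),   # class 0
               rng.normal([0.0, 5.0], 1.0, (200, 2))])  # class 1
y = np.array([0] * 200 + [1] * 200)

# "Training": learn one centroid per class from the simulation,
# standing in for the gradient-trained network of the real MLPF.
centroids = np.array([X[y == c].mean(axis=0) for c in (0, 1)])

def reconstruct(hit):
    """Assign the particle class whose learned centroid is closest."""
    return int(np.argmin(np.linalg.norm(centroids - hit, axis=1)))

print(reconstruct(np.array([5.0, 5.0])))  # 0: track plus energy deposit
print(reconstruct(np.array([0.0, 5.0])))  # 1: energy with no track
```

The point of the sketch is the workflow, shared with MLPF: the reconstruction rule is fitted to simulated collisions rather than written down by hand.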

The Truth About Merging With AI

Will humans one day merge with artificial intelligence? Futurist Ray Kurzweil predicts a coming “singularity” where humans upload their minds into digital systems, expanding intelligence and potentially achieving immortality. But critics argue that consciousness, creativity, love, and spiritual awareness cannot be reduced to algorithms. This discussion explores brain-computer interfaces, quantum mechanics and the mind, the Ship of Theseus identity paradox, and whether a digital copy of your brain would actually be you. Is AI-driven immortality possible—or does it misunderstand what it means to be human?

Every year the Center sponsors COSM, an exclusive national summit on the converging technologies remaking the world as we know it. Visit COSM.TECH (https://cosm.tech/) for information on COSM 2025, November 19–21 at the beautiful Hilton Scottsdale Resort and Spa in Scottsdale, AZ. Registration will launch mid-July.

The mission of the Walter Bradley Center for Natural and Artificial Intelligence at Discovery Institute is to explore the benefits as well as the challenges raised by artificial intelligence (AI) in light of the enduring truth of human exceptionalism. People know at a fundamental level that they are not machines. But faulty thinking can cause people to assent to views that in their heart of hearts they know to be untrue. The Bradley Center seeks to help individuals—and our society at large—to realize that we are not machines while at the same time helping to put machines (especially computers and AI) in proper perspective.

