
ICLR 2025

Shaden Alshammari, John Hershey, Axel Feldmann, William T. Freeman, Mark Hamilton.

MIT, Microsoft, Google.

[https://mhamilton.net/icon](https://mhamilton.net/icon)

[https://openreview.net/forum?id=WfaQrKCr4X](https://openreview.net/forum?id=WfaQrKCr4X)

[https://github.com/mhamilton723/STEGO](https://github.com/mhamilton723/STEGO)

“We introduce a single equation that unifies 20 machine learning methods into a periodic table. We use this framework to make a state-of-the-art unsupervised image classifier.”

A unified recipe for smarter AI: one equation to learn them all.

In the fast-evolving world of machine learning, researchers have created a wide variety of “loss functions”—mathematical tools that help AI models learn from data. Each loss function is designed to solve a specific type of problem, like recognizing patterns, grouping similar items, or making predictions. But with so many of them out there, the landscape can feel like a confusing toolbox with too many specialized tools.

This research changes the game by introducing a single, elegant equation that ties together many of these approaches using ideas from information theory—a field that studies how information is measured and transmitted. Think of it as a master key that unlocks the common principles behind seemingly unrelated learning methods, from clustering and dimensionality reduction to contrastive and supervised learning.

At the heart of the idea is a concept called KL divergence, which measures how much one probability distribution differs from another. The researchers show that many popular AI techniques can be seen as trying to minimize the difference between what a model is supposed to learn (the “supervisory signal”) and what it actually learns (its internal “representation”). This perspective reveals a hidden geometric structure that connects diverse methods under one unified theory.
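For readers who want the math: the standard KL divergence between two distributions p and q is shown below, followed by a sketch of the general shape the unifying objective takes, averaging the divergence between a supervisory neighborhood distribution and the one induced by the learned representation. The notation here (p(j|i), q_θ(j|i), N) is illustrative, not the paper's exact formulation; see the paper for the precise definitions.

```latex
% Standard definition of KL divergence between distributions p and q:
D_{\mathrm{KL}}(p \,\|\, q) = \sum_{j} p(j) \log \frac{p(j)}{q(j)}

% A sketch of the unifying objective's general shape (illustrative notation):
% for each data point i, compare the supervisory distribution p(. | i) with
% the distribution q_theta(. | i) induced by the learned representation,
% then average over all N data points:
\mathcal{L}(\theta) = \frac{1}{N} \sum_{i=1}^{N}
  D_{\mathrm{KL}}\big( p(\cdot \mid i) \,\|\, q_{\theta}(\cdot \mid i) \big)
```

Under this view, choosing a specific learning method amounts to choosing how p and q are defined: different choices recover clustering, contrastive, dimensionality-reduction, or supervised losses.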

But this isn’t just theoretical. By applying this framework—called I-Con—the team built image classifiers that learned without human labels and still beat the previous best systems on a major benchmark (ImageNet-1K) by 8%. Even more impressively, the same theory can help make AI models fairer by reducing bias in how they learn representations of the world.

In short, this work offers a powerful new lens for understanding, designing, and improving how AI learns—bringing us one step closer to AI that’s not just smart, but also unified and principled.


Introducing a periodic table of machine learning algorithms.
