
The Why Is a Discipline: Goodhart’s Law and AI

A reader asked me a question this week that I have been thinking about ever since.

She did not ask whether AI could malfunction. She did not ask whether bad actors could misuse it. She asked something sharper:

Can a system produce bad outcomes systematically, even when intent is good, and nothing is broken?

The answer is yes. And it is the most dangerous category of bad outcome, because there is nobody to blame and nothing to fix.

We have all the evidence we need. Amazon ran into it. YouTube ran into it. Hospitals are running into it now. AI labs are about to run into it at a planetary scale. And almost nobody is talking about why.

A 1975 economic principle explains it cleanly. A reader’s question forced me to refine an argument I have been making for years.

New essay: https://www.singularityweblog.com/goodharts-law-ai/

A follow-up to last week’s essay, The AI Paradox: Cure or Poison?


Why a good why is not enough, and how Goodhart’s Law explains the most dangerous category of AI failure: systems that work exactly as designed and still produce bad outcomes.
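To make the mechanism concrete, here is a minimal Python sketch of Goodhart’s Law as proxy over-optimization. It is a toy model, not taken from the essay: a hypothetical recommender picks the item with the highest proxy score (predicted clicks), while the outcome we actually care about is user satisfaction. The item generator, the weights, and the pool sizes are all illustrative assumptions.

```python
# Toy model of Goodhart's Law: optimizing a proxy metric harder and harder
# while the true outcome quietly gets worse. All numbers are made up.
import random
import statistics

random.seed(0)

def make_item():
    quality = random.gauss(0, 1)     # what users actually value
    clickbait = random.gauss(0, 1)   # attention-grabbing but hollow
    clicks = 0.3 * quality + clickbait        # the proxy the system optimizes
    satisfaction = quality - 0.5 * clickbait  # the outcome we actually want
    return clicks, satisfaction

def optimize(pool_size, trials=2000):
    """Pick the highest-proxy item from a candidate pool; average over trials."""
    picked = [max((make_item() for _ in range(pool_size)), key=lambda it: it[0])
              for _ in range(trials)]
    return (statistics.mean(p[0] for p in picked),
            statistics.mean(p[1] for p in picked))

for pool_size in (1, 10, 100, 1000):
    clicks, satisfaction = optimize(pool_size)
    print(f"pool={pool_size:5d}  clicks={clicks:+.2f}  satisfaction={satisfaction:+.2f}")

# As optimization pressure (pool size) grows, the proxy keeps improving
# while true satisfaction falls. Nothing malfunctions; the system works
# exactly as designed.
```

The point of the sketch is that no component is faulty and no one intends harm: the divergence comes entirely from the gap between the measure and the goal, which widens under optimization pressure.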
