{"id":235982,"date":"2026-04-27T10:05:37","date_gmt":"2026-04-27T15:05:37","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2026\/04\/the-why-is-a-discipline-goodharts-law-and-ai"},"modified":"2026-04-27T10:05:37","modified_gmt":"2026-04-27T15:05:37","slug":"the-why-is-a-discipline-goodharts-law-and-ai","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2026\/04\/the-why-is-a-discipline-goodharts-law-and-ai","title":{"rendered":"The Why Is a Discipline: Goodhart\u2019s Law and AI"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/the-why-is-a-discipline-goodharts-law-and-ai2.jpg\"><\/a><\/p>\n<p>A reader asked me a question this week that I have been thinking about ever since.<\/p>\n<p>She did not ask whether AI could malfunction. She did not ask whether bad actors could misuse it. She asked something sharper:<\/p>\n<p>Can a system produce bad outcomes systematically, even when intent is good, and nothing is broken?<\/p>\n<p>The answer is yes. And it is the most dangerous category of bad outcome, because nobody is at fault and nothing is broken.<\/p>\n<p>We have all the evidence we need. Amazon ran into it. YouTube ran into it. Hospitals are running into it now. AI labs are about to run into it at a planetary scale. And almost nobody is talking about why.<\/p>\n<p>A 1975 economic principle explains it cleanly. 
A reader\u2019s question forced me to refine an argument I have been making for years.<\/p>\n<p>New essay: <a href=\"https:\/\/www.singularityweblog.com\/goodharts-law-ai\/\">https:\/\/www.singularityweblog.com\/goodharts-law-ai\/<\/a><\/p>\n<div class=\"more-link-wrapper\"> <a class=\"more-link\" href=\"https:\/\/lifeboat.com\/blog\/2026\/04\/the-why-is-a-discipline-goodharts-law-and-ai\">Continue reading \u201cThe Why Is a Discipline: Goodhart\u2019s Law and AI\u201d | &gt;<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>A reader asked me a question this week that I have been thinking about ever since. She did not ask whether AI could malfunction. She did not ask whether bad actors could misuse it. She asked something sharper: Can a system produce bad outcomes systematically, even when intent is good, and nothing is broken? The [\u2026]<\/p>\n","protected":false},"author":737,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,39,6],"tags":[],"class_list":["post-235982","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-economics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/235982","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/737"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=235982"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/235982\/revisions"}],"wp:attachment":[{"href":"
https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=235982"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=235982"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=235982"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}