{"id":231776,"date":"2026-02-21T01:22:54","date_gmt":"2026-02-21T07:22:54","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2026\/02\/by-2050-we-could-get-10000-years-of-technological-progress"},"modified":"2026-02-21T01:22:54","modified_gmt":"2026-02-21T07:22:54","slug":"by-2050-we-could-get-10000-years-of-technological-progress","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2026\/02\/by-2050-we-could-get-10000-years-of-technological-progress","title":{"rendered":"By 2050 we could get \u201c10,000 years of technological progress\u201d"},"content":{"rendered":"<p><\/p>\n<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/Z19UEZHJzAg?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope;\n   picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they\u2019ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? Today\u2019s guest, Ajeya Cotra, recently placed 3rd out of 413 participants forecasting AI developments and is among the most thoughtful and respected commentators on where the technology is going.<\/p>\n<p>She thinks there\u2019s a meaningful chance we\u2019ll see as much change in the next 23 years as humanity faced in the last 10,000, thanks to the arrival of artificial general intelligence. 
Ajeya doesn\u2019t reach this conclusion lightly: she\u2019s had a ringside seat to the growth of all the major AI companies for 10 years \u2014 first as a researcher and grantmaker for technical AI safety at Coefficient Giving (formerly known as Open Philanthropy), and now as a member of technical staff at METR.<\/p>\n<p>So host Rob Wiblin asked her: is this plan to use AI to save us from AI a reasonable one?<\/p>\n<p>Ajeya agrees that humanity has repeatedly used technologies that create new problems to help solve those problems. After all:<br \/> \u2022 Cars enabled carjackings and drive-by shootings, but also faster police pursuits.<br \/> \u2022 Microbiology enabled bioweapons, but also faster vaccine development.<br \/> \u2022 The internet let lies spread faster, but also let fact checks spread just as fast.<\/p>\n<p>But she also thinks this case will be much harder. In her view, the window between AI automating AI research and the arrival of uncontrollably powerful superintelligence could be quite brief \u2014 perhaps a year or less. 
In that narrow window, we\u2019d need to redirect enormous amounts of AI labour away from making AI smarter and towards alignment research, biodefence, cyberdefence, adapting our political structures, and improving our collective decision-making.<\/p>\n<p>The plan might fail just because the idea is flawed at conception: it does sound a bit crazy to use an AI you don\u2019t trust to make sure that same AI benefits humanity.<\/p>\n<div class=\"more-link-wrapper\"> <a class=\"more-link\" href=\"https:\/\/lifeboat.com\/blog\/2026\/02\/by-2050-we-could-get-10000-years-of-technological-progress\">Continue reading \u201cBy 2050 we could get \u201c10,000 years of technological progress\u201d\u201d | &gt;<\/a><\/div>\n","protected":false},"excerpt":{"rendered":"<p>Every major AI company has the same safety plan: when AI gets crazy powerful and really dangerous, they\u2019ll use the AI itself to figure out how to make AI safe and beneficial. It sounds circular, almost satirical. But is it actually a bad plan? 
Today\u2019s guest, Ajeya Cotra, recently placed 3rd out of 413 participants [\u2026]<\/p>\n","protected":false},"author":662,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,418,9,6],"tags":[],"class_list":["post-231776","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-internet","category-military","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/231776","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/662"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=231776"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/231776\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=231776"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=231776"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=231776"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}