{"id":136579,"date":"2022-03-09T02:43:31","date_gmt":"2022-03-09T10:43:31","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2022\/03\/biological-anchors-a-trick-that-might-or-might-not-work"},"modified":"2022-03-09T02:43:31","modified_gmt":"2022-03-09T10:43:31","slug":"biological-anchors-a-trick-that-might-or-might-not-work","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2022\/03\/biological-anchors-a-trick-that-might-or-might-not-work","title":{"rendered":"Biological Anchors: A Trick That Might Or Might Not Work"},"content":{"rendered":"<p style=\"padding-right: 20px\"><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/biological-anchors-a-trick-that-might-or-might-not-work2.jpg\"><\/a><\/p>\n<p>I\u2019ve been trying to review and summarize Eliezer Yudkowksy\u2019s recent dialogues on AI safety. Previously in sequence:<a href=\"https:\/\/astralcodexten.substack.com\/p\/practically-a-book-review-yudkowsky\" rel=\"\"> Yudkowsky Contra Ngo On Agents<\/a>. Now we\u2019re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra\u2019s talking about and what\u2019s going on.<\/p>\n<p>The<a href=\"https:\/\/www.openphilanthropy.org\/\" rel=\"\"> Open Philanthropy Project<\/a> (\u201cOpen Phil\u201d) is a big effective altruist foundation interested in funding AI safety. It\u2019s got $20 billion, probably the majority of money in the field, so its decisions matter a lot and it\u2019s very invested in getting things right. In 2020, it asked senior researcher Ajeya Cotra to produce <strong><a href=\"https:\/\/drive.google.com\/drive\/u\/1\/folders\/15ArhEPZSTYU8f012bs6ehPS6-xmhtBPP\" rel=\"\">a report on when human-level AI would arrive.<\/a> <\/strong>It says the resulting document is \u201cinformal\u201d \u2014 but it\u2019s 169 pages long and likely to affect millions of dollars in funding, which some might describe as making it <em>kind<\/em> of formal. The report finds a 10% chance of \u201ctransformative AI\u201d by 2031, a 50% chance by 2052, and an almost 80% chance by 2100.<\/p>\n<p>Eliezer rejects their methodology and expects AI earlier (he doesn\u2019t offer many numbers, but<a href=\"https:\/\/www.econlib.org\/archives\/2017\/01\/my_end-of-the-w.html\" rel=\"\"> here<\/a> he gives Bryan Caplan 50\u201350 odds on 2030, albeit<a href=\"https:\/\/www.econlib.org\/archives\/2017\/01\/my_end-of-the-w.html#comment-166919\" rel=\"\"> not totally seriously<\/a>). He made the case in his own very long essay, <strong><a href=\"https:\/\/www.lesswrong.com\/posts\/ax695frGJEzGxFBK4\/biology-inspired-agi-timelines-the-trick-that-never-works\" rel=\"\">Biology-Inspired AGI Timelines: The Trick That Never Works<\/a><\/strong>, sparking a bunch of arguments and counterarguments and even more long essays.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>I\u2019ve been trying to review and summarize Eliezer Yudkowksy\u2019s recent dialogues on AI safety. Previously in sequence: Yudkowsky Contra Ngo On Agents. Now we\u2019re up to Yudkowsky contra Cotra on biological anchors, but before we get there we need to figure out what Cotra\u2019s talking about and what\u2019s going on. 
Eliezer rejects their methodology and expects AI earlier. He doesn’t offer many numbers, but here (https://www.econlib.org/archives/2017/01/my_end-of-the-w.html) he gives Bryan Caplan 50–50 odds on 2030, albeit not totally seriously (https://www.econlib.org/archives/2017/01/my_end-of-the-w.html#comment-166919). He made the case in his own very long essay, Biology-Inspired AGI Timelines: The Trick That Never Works (https://www.lesswrong.com/posts/ax695frGJEzGxFBK4/biology-inspired-agi-timelines-the-trick-that-never-works), sparking a bunch of arguments and counterarguments and even more long essays.