{"id":155315,"date":"2023-01-13T02:24:34","date_gmt":"2023-01-13T08:24:34","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/01\/victoria-krakovna-agi-ruin-sharp-left-turn-paradigms-of-ai-alignment"},"modified":"2023-01-13T02:24:34","modified_gmt":"2023-01-13T08:24:34","slug":"victoria-krakovna-agi-ruin-sharp-left-turn-paradigms-of-ai-alignment","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/01\/victoria-krakovna-agi-ruin-sharp-left-turn-paradigms-of-ai-alignment","title":{"rendered":"Victoria Krakovna\u2013AGI Ruin, Sharp Left Turn, Paradigms of AI Alignment"},"content":{"rendered":"<p><iframe style=\"display: block; margin: 0 auto; width: 100%; aspect-ratio: 4\/3; object-fit: contain;\" src=\"https:\/\/www.youtube.com\/embed\/ZpwSNiLV-nw?feature=oembed\" frameborder=\"0\" allow=\"accelerometer; autoplay; encrypted-media; gyroscope; picture-in-picture\" allowfullscreen><\/iframe><\/p>\n<p>Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future. 
In this interview, we discuss three of her recent LessWrong posts: DeepMind Alignment Team Opinions On AGI Ruin Arguments, Refining The Sharp Left Turn Threat Model, and Paradigms of AI Alignment.<\/p>\n<p>Transcript &amp; Audio: <a href=\"https:\/\/theinsideview.ai\/victoria\">https:\/\/theinsideview.ai\/victoria<\/a>.<\/p>\n<p>Host: <a href=\"https:\/\/twitter.com\/MichaelTrazzi\">https:\/\/twitter.com\/MichaelTrazzi<\/a>.<br \/> Victoria: <a href=\"https:\/\/twitter.com\/vkrakovna\">https:\/\/twitter.com\/vkrakovna<\/a>.<\/p>\n<p>DeepMind Alignment Team Opinions on AGI Ruin Arguments: <a href=\"https:\/\/www.lesswrong.com\/posts\/qJgz2YapqpFEDTLKn\/deepmind-alignment-team-opinions-on-agi-ruin-arguments\">https:\/\/www.lesswrong.com\/posts\/qJgz2YapqpFEDTLKn\/deepmind-a\u2026-arguments<\/a>.<br \/> Refining the Sharp Left Turn Threat Model: <a href=\"https:\/\/www.lesswrong.com\/posts\/usKXS5jGDzjwqv3FJ\/refining-the-sharp-left-turn-threat-model-part-1-claims-and\">https:\/\/www.lesswrong.com\/posts\/usKXS5jGDzjwqv3FJ\/refining-t\u2026claims-and<\/a>.<br \/> Paradigms of AI Alignment: <a href=\"https:\/\/www.lesswrong.com\/posts\/JC7aJZjt2WvxxffGz\/paradigms-of-ai-alignment-components-and-enablers\">https:\/\/www.lesswrong.com\/posts\/JC7aJZjt2WvxxffGz\/paradigms-\u2026d-enablers<\/a>.<\/p>\n<p>This conversation presents Victoria\u2019s personal views and does not represent the views of DeepMind as a whole.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Victoria Krakovna is a Research Scientist at DeepMind working on AGI safety and a co-founder of the Future of Life Institute, a non-profit organization working to mitigate technological risks to humanity and increase the chances of a positive future. 
In this interview, we discuss three of her recent LessWrong posts: DeepMind Alignment Team Opinions [\u2026]<\/p>\n","protected":false},"author":556,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[20,6],"tags":[],"class_list":["post-155315","post","type-post","status-publish","format-standard","hentry","category-futurism","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/155315","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/556"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=155315"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/155315\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=155315"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=155315"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=155315"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}