{"id":193003,"date":"2024-07-15T19:22:56","date_gmt":"2024-07-16T00:22:56","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2024\/07\/the-real-long-term-dangers-of-ai"},"modified":"2024-07-15T19:22:56","modified_gmt":"2024-07-16T00:22:56","slug":"the-real-long-term-dangers-of-ai","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2024\/07\/the-real-long-term-dangers-of-ai","title":{"rendered":"The real long-term dangers of AI"},"content":{"rendered":"<p>Read &amp; tell me what you think \ud83d\ude42<\/p>\n<hr>\n<p>\n<em>There is a rift between near and long-term perspectives on AI safety \u2013 one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. But their critics have accused the Longtermists of obsessing over Terminator-style scenarios in concert with Big Tech to distract regulators from more pressing issues like data privacy. In this essay, Mark Bailey and Susan Schneider argue that<\/em> <em>we shouldn\u2019t be fighting about the Terminator; we should be focusing on the harm to the mind itself \u2013 to our very freedom to think.<\/em><\/p>\n<p>There has been a growing debate between near and long-term perspectives on AI safety \u2013 one that has stirred controversy. 
\u201cLongtermists\u201d have been accused of being <a href=\"https:\/\/www.theguardian.com\/technology\/2023\/oct\/31\/uk-ai-summit-tech-regulation\" target=\"_blank\">co-opted by Big Tech<\/a> and fixating on science-fiction, Terminator-style scenarios to distract regulators from the real, more near-term issues, such as <a href=\"https:\/\/www.nist.gov\/news-events\/news\/2022\/03\/theres-more-ai-bias-biased-data-nist-report-highlights\" target=\"_blank\">algorithmic bias and data privacy<\/a>.<\/p>\n<p><a href=\"https:\/\/www.centreforeffectivealtruism.org\/longtermism\" target=\"_blank\">Longtermism<\/a> is an ethical theory that requires us to consider the effects of today\u2019s decisions on all of humanity\u2019s potential futures. It can lead to extremes, as it concludes that one should sacrifice the <em>present<\/em> wellbeing of humanity for the good of humanity\u2019s <em>potential<\/em> futures. <a href=\"https:\/\/www.bbc.com\/future\/article\/20220805-what-is-longtermism-and-why-does-it-matter\" target=\"_blank\">Many Longtermists<\/a> believe humans will ultimately lose control of AI, as it will become \u201csuperintelligent\u201d, outthinking humans in every domain \u2013 social acumen, mathematical abilities, strategic thinking, and more.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Read &amp; tell me what you think \ud83d\ude42 There is a rift between near and long-term perspectives on AI safety \u2013 one that has stirred controversy. Longtermists argue that we need to prioritise the well-being of people far into the future, perhaps at the expense of people alive today. 
But their critics have accused the [\u2026]<\/p>\n","protected":false},"author":661,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[41,2229,6],"tags":[],"class_list":["post-193003","post","type-post","status-publish","format-standard","hentry","category-information-science","category-mathematics","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/193003","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/661"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=193003"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/193003\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=193003"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=193003"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=193003"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}