{"id":132140,"date":"2021-12-10T23:23:22","date_gmt":"2021-12-11T07:23:22","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2021\/12\/community-of-ethical-hackers-needed-to-prevent-ais-looming-crisis-of-trust-experts-argue"},"modified":"2021-12-10T23:23:22","modified_gmt":"2021-12-11T07:23:22","slug":"community-of-ethical-hackers-needed-to-prevent-ais-looming-crisis-of-trust-experts-argue","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2021\/12\/community-of-ethical-hackers-needed-to-prevent-ais-looming-crisis-of-trust-experts-argue","title":{"rendered":"Community of ethical hackers needed to prevent AI\u2019s looming \u2018crisis of trust\u2019, experts argue"},"content":{"rendered":"<p>The Artificial Intelligence industry should create a global community of hackers and \u201cthreat modelers\u201d dedicated to stress-testing the harm potential of new AI products in order to earn the trust of governments and the public before it\u2019s too late.<\/p>\n<p>This is one of the recommendations made by an international team of risk and machine-learning experts, led by researchers at the University of Cambridge\u2019s Center for the Study of Existential Risk (CSER), who have authored a new \u201ccall to action\u201d published today in the journal Science.<\/p>\n<p>They say that companies building intelligent technologies should harness techniques such as \u201cred team\u201d hacking, audit trails and \u201cbias bounties\u201d\u2014paying out rewards for revealing ethical flaws\u2014to prove their integrity before releasing AI for use on the wider public.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The Artificial Intelligence industry should create a global community of hackers and \u201cthreat modelers\u201d dedicated to stress-testing the harm 
potential of new AI products in order to earn the trust of governments and the public before it\u2019s too late. This is one of the recommendations made by an international team of risk and machine-learning experts, [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34,12,6],"tags":[],"class_list":["post-132140","post","type-post","status-publish","format-standard","hentry","category-cybercrime-malcode","category-existential-risks","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/132140","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=132140"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/132140\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=132140"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=132140"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=132140"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}