{"id":178225,"date":"2023-12-13T09:23:36","date_gmt":"2023-12-13T15:23:36","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2023\/12\/ai-networks-are-more-vulnerable-to-malicious-attacks-than-previously-thought"},"modified":"2023-12-13T09:23:36","modified_gmt":"2023-12-13T15:23:36","slug":"ai-networks-are-more-vulnerable-to-malicious-attacks-than-previously-thought","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2023\/12\/ai-networks-are-more-vulnerable-to-malicious-attacks-than-previously-thought","title":{"rendered":"AI Networks are more Vulnerable to Malicious Attacks than previously thought"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/ai-networks-are-more-vulnerable-to-malicious-attacks-than-previously-thought2.jpg\"><\/a><\/p>\n<p>Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions.<\/p>\n<p>At issue are so-called \u201cadversarial attacks,\u201d in which someone manipulates the data being fed into an AI system in order to confuse it. For example, someone might know that putting a specific type of sticker at a specific spot on a stop sign could effectively make the stop sign invisible to an AI system. Or a hacker could install code on an X-ray machine that alters the image data in a way that causes an AI system to make inaccurate diagnoses.<\/p>\n<p>\u201cFor the most part, you can make all sorts of changes to a stop sign, and an AI that has been trained to identify stop signs will still know it\u2019s a stop sign,\u201d says Tianfu Wu, co-author of a paper on the new work and an associate professor of electrical and computer engineering at North Carolina State University. \u201cHowever, if the AI has a vulnerability, and an attacker knows the vulnerability, the attacker could take advantage of the vulnerability and cause an accident.\u201d<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Artificial intelligence tools hold promise for applications ranging from autonomous vehicles to the interpretation of medical images. However, a new study finds these AI tools are more vulnerable than previously thought to targeted attacks that effectively force AI systems to make bad decisions. At issue are so-called \u201cadversarial attacks,\u201d in which someone manipulates the data [\u2026]<\/p>\n","protected":false},"author":707,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[11,6],"tags":[],"class_list":["post-178225","post","type-post","status-publish","format-standard","hentry","category-biotech-medical","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/178225","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/707"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=178225"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/178225\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=178225"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=178225"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=178225"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}