{"id":223216,"date":"2025-10-10T05:21:46","date_gmt":"2025-10-10T10:21:46","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/10\/ai-tools-can-help-hackers-plant-hidden-flaws-in-computer-chips-study-finds"},"modified":"2025-10-10T05:21:46","modified_gmt":"2025-10-10T10:21:46","slug":"ai-tools-can-help-hackers-plant-hidden-flaws-in-computer-chips-study-finds","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/10\/ai-tools-can-help-hackers-plant-hidden-flaws-in-computer-chips-study-finds","title":{"rendered":"AI tools can help hackers plant hidden flaws in computer chips, study finds"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/ai-tools-can-help-hackers-plant-hidden-flaws-in-computer-chips-study-finds.jpg\"><\/a><\/p>\n<p>Widely available artificial intelligence systems can be used to deliberately insert hard-to-detect security vulnerabilities into the code that defines computer chips, according to new research from the NYU Tandon School of Engineering that warns of the potential weaponization of AI in hardware design.<\/p>\n<p>In a study published by <a href=\"https:\/\/www.computer.org\/csdl\/magazine\/sp\/5555\/01\/11169309\/2a5vhnyZg6Q\" target=\"_blank\"><i>IEEE Security &amp; Privacy<\/i><\/a>, an NYU Tandon research team showed that <a href=\"https:\/\/techxplore.com\/tags\/large+language+models\/\" rel=\"tag\" class=\"\">large language models<\/a> like ChatGPT could help both novices and experts create \u201chardware Trojans,\u201d malicious modifications hidden within chip designs that can leak <a href=\"https:\/\/techxplore.com\/tags\/sensitive+information\/\" rel=\"tag\" class=\"\">sensitive information<\/a>, disable systems or grant unauthorized access to attackers.<\/p>\n<p>To test whether AI could facilitate malicious hardware modifications, the researchers organized a two-year competition called the AI Hardware Attack Challenge as part of CSAW, an 
annual student-run cybersecurity event held by the NYU Center for Cybersecurity.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Widely available artificial intelligence systems can be used to deliberately insert hard-to-detect security vulnerabilities into the code that defines computer chips, according to new research from the NYU Tandon School of Engineering that warns of the potential weaponization of AI in hardware design. In a study published by IEEE Security &amp; Privacy, an NYU Tandon [\u2026]<\/p>\n","protected":false},"author":427,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34,6],"tags":[],"class_list":["post-223216","post","type-post","status-publish","format-standard","hentry","category-cybercrime-malcode","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/223216","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/427"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=223216"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/223216\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=223216"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=223216"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=223216"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}