AI tools can help hackers plant hidden flaws in computer chips, study finds

Widely available artificial intelligence systems can be used to deliberately insert hard-to-detect security vulnerabilities into the code that defines computer chips, according to new research from the NYU Tandon School of Engineering that warns of the potential weaponization of AI in hardware design.

In a study published by IEEE Security & Privacy, an NYU Tandon research team showed that AI tools like ChatGPT could help both novices and experts create “hardware Trojans,” malicious modifications hidden within chip designs that can leak sensitive data, disable systems or grant unauthorized access to attackers.
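To illustrate the concept, here is a minimal sketch of how a trigger-based hardware Trojan operates, modeled in Python rather than actual chip-design code (real Trojans are inserted into hardware description languages such as Verilog; the names, secret value, and trigger pattern below are hypothetical, not taken from the study):

```python
# Hypothetical Python model of a trigger-based hardware Trojan.
# A real Trojan would live in HDL code, not software, but the
# trigger/payload structure is the same.

SECRET_KEY = 0xA5        # secret the Trojan exfiltrates (hypothetical value)
TRIGGER = (0xDE, 0xAD)   # rare input pattern that activates the payload

def trojan_adder(a: int, b: int) -> int:
    """8-bit adder with a hidden malicious payload.

    For almost every input it behaves exactly like a clean adder,
    so ordinary functional testing is unlikely to expose it. Only
    the rare trigger pattern activates the payload, which places
    the secret on the output instead of the sum.
    """
    if (a, b) == TRIGGER:       # Trojan stays dormant until triggered
        return SECRET_KEY       # payload: leak the secret on the output
    return (a + b) & 0xFF       # normal behavior: modular 8-bit addition

# Normal operation is indistinguishable from a clean design:
print(trojan_adder(3, 4))         # behaves like an ordinary adder
# The trigger input exfiltrates the key:
print(trojan_adder(0xDE, 0xAD) == SECRET_KEY)
```

Because the malicious logic is inert for nearly all inputs, detecting such a modification requires inspecting the design itself rather than simply testing its outputs, which is what makes hardware Trojans hard to catch.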

To test whether AI could facilitate malicious hardware modifications, the researchers organized a two-year competition called the AI Hardware Attack Challenge as part of CSAW, an annual student-run cybersecurity event held by the NYU Center for Cybersecurity.
