Widely available artificial intelligence systems can be used to deliberately insert hard-to-detect security vulnerabilities into the code that defines computer chips, according to new research from the NYU Tandon School of Engineering that warns of the potential weaponization of AI in hardware design.
In a study published in IEEE Security & Privacy, an NYU Tandon research team showed that large language models like ChatGPT could help both novices and experts create “hardware Trojans”: malicious modifications hidden within chip designs that can leak sensitive information, disable systems, or grant attackers unauthorized access.
To test whether AI could facilitate malicious hardware modifications, the researchers ran a two-year competition, the AI Hardware Attack Challenge, as part of CSAW, an annual student-run cybersecurity event held by the NYU Center for Cybersecurity.