Researchers from NC State University have identified the first hardware vulnerability that allows attackers to compromise the data privacy of artificial intelligence (AI) users by exploiting the physical hardware on which AI models run.
The paper, “GATEBLEED: A Timing-Only Membership Inference Attack, MoE-Routing Inference, and a Stealthy, Generic Magnifier Via Hardware Power Gating in AI Accelerators,” will be presented at the IEEE/ACM International Symposium on Microarchitecture (MICRO 2025), being held Oct. 18–22 in Seoul, South Korea. The paper is currently available on the arXiv preprint server.
“What we’ve discovered is an AI privacy attack,” says Joshua Kalyanapu, first author of the paper and a Ph.D. student at North Carolina State University. “Security attacks refer to stealing things actually stored somewhere in a system’s memory, such as stealing an AI model itself or stealing the hyperparameters of the model. That’s not what we found. Privacy attacks steal stuff not actually stored on the system, such as the data used to train the model and attributes of the data input to the model. These facts are leaked through the behavior of the AI model. What we found is the first vulnerability that allows successfully attacking AI privacy via hardware.”
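The paper's attack exploits hardware power gating in AI accelerators; as a purely conceptual illustration (not the authors' method), the sketch below shows the general idea behind a timing-only membership inference test: an attacker times the model on a sample and compares that latency against a threshold calibrated on data known not to be in the training set, inferring membership from behavior alone rather than from anything stored in memory. All function and variable names here are hypothetical.

```python
# Toy sketch of timing-only membership inference (illustrative only;
# this is NOT the GATEBLEED mechanism, which exploits accelerator power gating).
import statistics
import time

def run_inference(model, sample):
    """Run the victim model once and return wall-clock latency in seconds."""
    start = time.perf_counter()
    model(sample)  # the attacker observes only timing, not model outputs
    return time.perf_counter() - start

def median_latency(model, sample, trials=50):
    """Median over repeated runs to suppress measurement noise."""
    return statistics.median(run_inference(model, sample) for _ in range(trials))

def guess_membership(model, sample, threshold):
    """Guess 'member' if the sample runs faster than a threshold calibrated on non-members."""
    return median_latency(model, sample) < threshold

if __name__ == "__main__":
    # Hypothetical stand-in for a deployed model whose latency depends on the input.
    def toy_model(x):
        time.sleep(0.001 if x % 2 == 0 else 0.002)

    # Calibrate the threshold on samples known not to be in the training set.
    non_members = [1, 3, 5, 7]
    threshold = 0.75 * statistics.median(
        median_latency(toy_model, s, trials=10) for s in non_members
    )

    print(guess_membership(toy_model, 2, threshold))  # likely True for the faster class
    print(guess_membership(toy_model, 9, threshold))  # likely False
```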