UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat to programmers who use AI to help write code.

Joe Spracklen, a UTSA doctoral student in computer science, led the study on how large language models (LLMs) frequently generate insecure code.

His team’s paper, published on the arXiv preprint server, has also been accepted for publication at the USENIX Security Symposium 2025, a cybersecurity and privacy conference.
