{"id":210860,"date":"2025-04-08T10:05:39","date_gmt":"2025-04-08T15:05:39","guid":{"rendered":"https:\/\/lifeboat.com\/blog\/2025\/04\/ai-threats-in-software-development-revealed-in-new-study"},"modified":"2025-04-08T10:05:39","modified_gmt":"2025-04-08T15:05:39","slug":"ai-threats-in-software-development-revealed-in-new-study","status":"publish","type":"post","link":"https:\/\/lifeboat.com\/blog\/2025\/04\/ai-threats-in-software-development-revealed-in-new-study","title":{"rendered":"AI threats in software development revealed in new study"},"content":{"rendered":"<p><a class=\"aligncenter blog-photo\" href=\"https:\/\/lifeboat.com\/blog.images\/ai-threats-in-software-development-revealed-in-new-study2.jpg\"><\/a><\/p>\n<p>UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat to programmers who use AI to help write code.<\/p>\n<p>Joe Spracklen, a UTSA doctoral student in computer science, led the study on how <a href=\"https:\/\/techxplore.com\/tags\/large+language+models\/\" rel=\"tag\" class=\"\">large language models<\/a> (LLMs) frequently generate insecure code.<\/p>\n<p>His team\u2019s paper, <a href=\"https:\/\/arxiv.org\/abs\/2406.10279\" target=\"_blank\">published<\/a> on the <i>arXiv<\/i> preprint server, has also been accepted for publication at the <a href=\"https:\/\/www.usenix.org\/conference\/usenixsecurity25\" target=\"_blank\">USENIX Security Symposium 2025<\/a>, a cybersecurity and privacy conference.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>UTSA researchers recently completed one of the most comprehensive studies to date on the risks of using AI models to develop software. In a new paper, they demonstrate how a specific type of error could pose a serious threat to programmers who use AI to help write code. 
Joe Spracklen, a UTSA doctoral student in [\u2026]<\/p>\n","protected":false},"author":732,"featured_media":0,"comment_status":"open","ping_status":"closed","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[34,6],"tags":[],"class_list":["post-210860","post","type-post","status-publish","format-standard","hentry","category-cybercrime-malcode","category-robotics-ai"],"_links":{"self":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/210860","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/users\/732"}],"replies":[{"embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/comments?post=210860"}],"version-history":[{"count":0,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/posts\/210860\/revisions"}],"wp:attachment":[{"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/media?parent=210860"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/categories?post=210860"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/lifeboat.com\/blog\/wp-json\/wp\/v2\/tags?post=210860"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}