There is a growing desire to integrate rapidly advancing artificial intelligence (AI) technologies into Department of Defense (DoD) systems. AI may provide battlefield advantage by improving the speed, quality, and accuracy of decision-making while enabling autonomy and assistive automation.
Because machine learning is statistical in nature, a significant body of work has focused on making AI-enabled systems robust at inference time to the natural performance degradation caused by data distribution shift (for example, when training data does not reflect a highly dynamic deployment environment).
However, as early as 2014, researchers demonstrated that an adversary with control over a model's input can deliberately manipulate its output. Subsequent work has confirmed further risks, including data poisoning, physically constrained adversarial patches for evasion, and model-stealing attacks. These attacks are typically tested in simulated or physical environments under relatively pristine conditions compared to what might be expected on a battlefield.
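To make the input-manipulation threat concrete, the sketch below implements a Fast Gradient Sign Method (FGSM)-style evasion attack, the canonical technique from the 2014-era work referenced above. It is a minimal illustration on a toy, hypothetical logistic "model" (the weights and epsilon are invented for the example), not a reconstruction of any specific system or experiment from this document.

```python
import numpy as np

# Toy, hypothetical model: logistic regression with random weights.
rng = np.random.default_rng(0)
w = rng.normal(size=8)  # model weights (illustrative only)
b = 0.0

def predict(x):
    """Return the model's probability of class 1 for input x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

def fgsm(x, y, eps=0.25):
    """FGSM: nudge each feature of x by eps in the direction that
    increases the cross-entropy loss for the true label y.
    For a logistic model, dLoss/dx = (p - y) * w."""
    grad = (predict(x) - y) * w
    return x + eps * np.sign(grad)

# Craft an adversarial input from a clean one.
x = rng.normal(size=8)
y = 1.0 if predict(x) >= 0.5 else 0.0  # take the model's label as "truth"
x_adv = fgsm(x, y)

# Each feature moves by at most eps, yet the model's confidence in the
# true label drops; with a larger eps or deeper model, the label flips.
print(predict(x), predict(x_adv))
```

The key point for the threat model above is that the perturbation is bounded per-feature (here by `eps`), so the adversarial input can remain close to a legitimate one while still degrading the model's decision.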