
Oct 25, 2020

Adversarial Machine Learning Threat Matrix

Posted by in categories: cybercrime/malcode, robotics/AI, transportation

Microsoft, in collaboration with the MITRE research organization and a dozen other organizations, including IBM, Nvidia, Airbus, and Bosch, has released the Adversarial ML Threat Matrix, a framework that aims to help cybersecurity experts detect, respond to, and prepare for attacks against artificial intelligence models.

As AI models are deployed across more fields, attacks that jeopardize their safety and integrity are on the rise. The Adversarial Machine Learning (ML) Threat Matrix catalogs the techniques malicious adversaries use to destabilize AI systems.

AI models perform several tasks, such as identifying objects in images, by analyzing the data they ingest for common patterns. Security researchers have demonstrated adversarial patterns that attackers could feed into AI systems to trick these models into making mistakes. An Auburn University team even managed to fool a Google LLC image-recognition model into misclassifying objects in photos simply by slightly adjusting each object's position in the input image.
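
The gist of such an evasion attack is easy to sketch. The Python snippet below shows the classic fast gradient sign method (FGSM), which nudges every pixel in the direction that most increases the classifier's loss, yielding an image that looks unchanged to a human but can flip the model's prediction. This is an illustrative assumption on our part, not the specific technique used by the Auburn team or described in the threat matrix; the use of PyTorch, a pretrained ResNet-18, and the epsilon value are likewise assumptions for the sketch.

```python
import torch
import torch.nn.functional as F
from torchvision import models

# Assumed model choice: any differentiable image classifier works here.
model = models.resnet18(weights="IMAGENET1K_V1").eval()

def fgsm_perturb(image: torch.Tensor, label: torch.Tensor,
                 epsilon: float = 0.03) -> torch.Tensor:
    """Return an adversarially perturbed copy of `image` (FGSM sketch).

    image:   input tensor of shape (1, 3, H, W), values in [0, 1]
    label:   true class index, shape (1,)
    epsilon: perturbation budget (illustrative value, not from the article)
    """
    image = image.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(image), label)
    loss.backward()
    # Step each pixel in the direction that increases the loss most.
    adversarial = image + epsilon * image.grad.sign()
    return adversarial.clamp(0.0, 1.0).detach()

# Usage: a random tensor stands in for a real photo.
x = torch.rand(1, 3, 224, 224)
y = torch.tensor([0])
x_adv = fgsm_perturb(x, y)
print((x_adv - x).abs().max())  # perturbation stays within epsilon
```

The key design point is that the attacker never touches the model's weights: the perturbation is computed purely from the gradient of the loss with respect to the input, which is why such attacks are hard to spot by inspecting the model itself.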
