Jan 8, 2021

Is neuroscience the key to protecting AI from adversarial attacks?

Posted in categories: biotech/medical, cybercrime/malcode, neuroscience, robotics/AI

Deep learning has come a long way since the days when it could only recognize handwritten characters on checks and envelopes. Today, deep neural networks have become a key component of many computer vision applications, from photo and video editors to medical software and self-driving cars.

Roughly fashioned after the structure of the brain, neural networks have come closer to seeing the world as humans do. But they still have a long way to go, and they make mistakes in situations where humans would never err.

These failure-inducing inputs, generally known as adversarial examples, change the behavior of an AI model in befuddling ways. Adversarial machine learning is one of the greatest challenges facing current artificial intelligence systems: adversarial attacks can cause machine learning models to fail in unpredictable ways or leave them vulnerable to cyberattacks.
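To make the idea concrete, here is a minimal sketch of a gradient-based adversarial perturbation (in the spirit of the fast gradient sign method) applied to a toy linear classifier. The weights, input, and epsilon below are illustrative values chosen for this sketch, not taken from any real model:

```python
import numpy as np

# Toy binary classifier: class 1 if w.x + b > 0.
# All values here are illustrative, not from a real model.
w = np.array([1.0, -2.0])
b = 0.0

def predict(x):
    return int(w @ x + b > 0)

def fgsm(x, y_true, eps):
    # Gradient of the logistic loss w.r.t. the input x,
    # for a label y_true in {0, 1}.
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    grad_x = (p - y_true) * w
    # Step in the direction that increases the loss.
    return x + eps * np.sign(grad_x)

x = np.array([2.0, 0.5])          # clean input, classified as 1
x_adv = fgsm(x, y_true=1, eps=1.5)

print(predict(x), predict(x_adv))  # the small perturbation flips the prediction
```

The same principle scales to deep networks: because the attacker follows the model's own loss gradient, a perturbation that looks negligible to a human can push the input across the decision boundary.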
