At their best, AI systems extend and augment the work we do, helping us realize our goals. At their worst, they undermine them. We’ve all heard of high-profile instances of AI bias, like Amazon’s machine learning (ML) recruitment engine that discriminated against women or the racist results from Google Vision. These cases don’t just harm individuals; they work against their creators’ original intentions. Quite rightly, these examples attracted public outcry and, as a result, shaped the perception of AI bias as something categorically bad that must be eliminated.
While most people agree on the need to build high-trust, fair AI systems, removing all bias from AI is unrealistic. In fact, as the new wave of ML models goes beyond the deterministic, these models are actively being designed with some level of subjectivity built in. Today’s most sophisticated systems synthesize inputs, contextualize content and interpret results. Rather than trying to eliminate bias entirely, organizations should seek to understand and measure subjectivity better.