Mar 28, 2022

Explainable AI (XAI) with Class Maps

Posted in categories: biotech/medical, information science, robotics/AI

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify which group each observation in a set belongs to. A machine learning practitioner typically builds multiple models and selects as the final classifier the one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from a classification model than just predictions. They may wish to know the reasons behind a classifier's decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting in which a classifier determines that a patient is at high risk of developing an illness. If medical experts can learn which factors contributed to this prediction, they could use that information to help determine suitable treatments.
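
As an illustration of that model-selection workflow, the sketch below (not taken from the article) trains a few candidate classifiers with scikit-learn on synthetic data and keeps the one with the highest accuracy on a held-out test set; the dataset, the particular models, and the metric are all stand-ins.

```python
# Minimal sketch: fit several candidate classifiers and keep the one with
# the best accuracy on a held-out test set. Uses scikit-learn and a
# synthetic dataset purely for illustration.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0
)

candidates = {
    "decision_tree": DecisionTreeClassifier(random_state=0),
    "logistic_regression": LogisticRegression(max_iter=1000),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Fit each candidate and score it on the held-out test set.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best_name = max(scores, key=scores.get)
print(scores)
print(f"Selected classifier: {best_name}")
```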

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].
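
To make that contrast concrete, here is a small sketch, again assuming scikit-learn and not drawn from the article: a shallow decision tree whose decision rules can be printed and read directly, next to a random forest whose internal logic has no comparably compact representation to inspect.

```python
# Transparent model vs. black box, illustrated on the iris dataset.
from sklearn.datasets import load_iris
from sklearn.ensemble import RandomForestClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
X, y = data.data, data.target

# A shallow decision tree is transparent: its full decision mechanism can
# be printed as a short set of human-readable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(data.feature_names)))

# A random forest with hundreds of trees is often more accurate, but it
# offers no similarly compact rule set explaining how it reaches a decision.
forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
print(forest.predict(X[:5]))
```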

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
