
Explainable AI (XAI) with Class Maps

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify the groups to which a set of observations belong. A machine learning practitioner typically builds multiple models and selects as the final classifier the one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from a classification model than predictions alone. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. Consider, for instance, a medical setting in which a classifier determines a patient to be at high risk of developing an illness. If medical experts can learn the contributing factors to this prediction, they can use that information to help determine suitable treatments.
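As a concrete illustration of the workflow above, here is a minimal Python sketch (our own illustration, not code from the article): it fits a few candidate scikit-learn classifiers and keeps whichever scores best on a held-out test set. The dataset and the candidate models are arbitrary placeholders.

```python
# Minimal model-selection loop: train several candidate classifiers
# and keep the one with the best accuracy on a held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

candidates = {
    "logistic regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "decision tree": DecisionTreeClassifier(random_state=0),
    "random forest": RandomForestClassifier(random_state=0),
}

scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best = max(scores, key=scores.get)
print(scores)
print("selected:", best)
```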

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
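To make “methods that help humans understand the outputs” concrete, below is a short sketch of one widely used XAI technique, permutation importance. Note that this is a generic illustration, not the class-map method the article introduces: it shuffles one feature at a time and reports how much held-out accuracy drops, so the features whose shuffling hurts most are the model’s main drivers.

```python
# Permutation importance: shuffle each feature on the test set and
# measure how much the model's score drops; large drops mark the
# features the classifier relies on most.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

data = load_breast_cancer()
X_train, X_test, y_train, y_test = train_test_split(
    data.data, data.target, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)

# Report the five most influential features.
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{data.feature_names[i]}: "
          f"{result.importances_mean[i]:.4f}")
```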

Robot dog called in to help manage Pompeii

A four-legged robot called Spot has been deployed to wander around the ruins of ancient Pompeii, identifying structural and safety issues while delving underground to inspect tunnels dug by relic thieves.

The dog-like robot is the latest in a series of technologies used as part of a broader project to better manage the archaeological park since 2013, when Unesco threatened to add Pompeii to a list of world heritage sites in peril unless Italian authorities improved its preservation.

AI, the brain, and cognitive plausibility

This point was made clear in a recent paper by David Silver, Satinder Singh, Doina Precup, and Richard Sutton from DeepMind titled “Reward is Enough.” The authors argue that “maximizing reward is enough to drive behavior that exhibits most if not all attributes of intelligence.” However, reward is not enough. The statement is simplistic, vague, and circular, and it explains little, because the assertion is meaningless outside highly structured and controlled environments. Besides, humans do many things for no reward at all, like writing fatuous papers about rewards.

The point is this: suppose you or your team talk about how intelligent or cognitively plausible your solution is. I see this kind of argument quite a bit. If so, you are not thinking enough about a specific problem or the people affected by that problem. Practitioners and business-minded leaders need to know about cognitive plausibility because it reflects the wrong culture. Real-world problem solving addresses the problems the world presents to intelligence, and the solutions to those problems are hardly ever cognitively plausible. While insiders want their goals to be understood and shared by their solutions, your solution does not need to understand that it is solving a problem, but you do.

If you have a problem to solve that aligns with a business goal and you seek an optimal solution to accomplish that goal, then how “cognitively plausible” a solution is does not matter. How a problem is solved is always secondary to whether it is solved, and if you don’t care how, you can solve just about anything. The goal itself, and how optimal a solution is for the problem, matter more than how the goal is accomplished, whether the solution was self-referencing, or what the solution looked like if the problem went unsolved.

Grand challenges in AI and data science

This conference will take place at EMBL Heidelberg, with a live streaming option for virtual participants free of charge. Proof of COVID-19 vaccination or recovery is required for on-site attendance. Please see EMBL’s COVID-19 terms and conditions.

Workshop registration is available only to EIROforum members. Please note that the workshop is an on-site-only event; contact Iva Gavran for more information or use this link to register.

Printing circuits on rare nanomagnets puts a new spin on computing

New research that artificially creates a rare form of matter known as spin glass could spark a new paradigm in artificial intelligence by allowing algorithms to be printed directly as physical hardware. The unusual properties of spin glass enable a form of AI that can recognize objects from partial images, much as the brain does, and show promise for low-power computing, among other intriguing capabilities.

“Our work accomplished the first experimental realization of an artificial spin glass consisting of nanomagnets arranged to replicate a neural network,” said Michael Saccone, a post-doctoral researcher at Los Alamos National Laboratory and lead author of the new paper in Nature Physics. “Our paper lays the groundwork we need to use these practically.”

Spin glasses are a way to think about material structure mathematically. Being free, for the first time, to tweak the interactions within these systems using electron-beam lithography makes it possible to represent a variety of computing problems in spin-glass networks, Saccone said.
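The associative recall described above, recovering a whole pattern from a partial one, is classically modeled in software by a Hopfield network, which is itself mathematically a spin glass. The toy sketch below is our illustration, not the paper’s nanomagnet hardware: it stores a single binary pattern in Hebbian weights and recovers it from a corrupted copy.

```python
# Toy Hopfield network: a software analogue of spin-glass associative
# memory. One pattern is stored; a corrupted copy relaxes back to it.
import numpy as np

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])  # the stored "image"
W = np.outer(pattern, pattern).astype(float)       # Hebbian weights
np.fill_diagonal(W, 0)                             # no self-coupling

state = pattern.copy()
state[:3] *= -1                                    # corrupt 3 "pixels"

# Asynchronous updates: each unit aligns with its local field until
# the network settles into the stored low-energy state.
for _ in range(10):
    for i in range(len(state)):
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered:", np.array_equal(state, pattern))  # True
```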

World’s smartest traffic management system launched in Melbourne

One of Melbourne’s busiest roads will host a world-leading traffic management system using the latest technology to reduce traffic jams and improve road safety.

The ‘Intelligent Corridor’ at Nicholson Street, Carlton was launched by the University of Melbourne, Austrian technology firm Kapsch TrafficCom and the Victorian Department of Transport.

Covering a 2.5-kilometre stretch of Nicholson Street between Alexandra and Victoria Parades, the Intelligent Corridor will use sensors, cloud-based AI, machine learning algorithms, predictive models and real-time data capture to improve traffic management: easing congestion, improving road safety for cars, pedestrians and cyclists, and reducing emissions from clogged traffic.

Space Force using Spire data to detect satellite jamming

WASHINGTON — A constellation of about 40 geolocation satellites operated by Spire Global is collecting data used by the U.S. Space Force to detect GPS jamming, an issue now gaining worldwide attention due to Russia’s use of electronic warfare tactics in the run-up to the invasion of Ukraine.

“All of our fellow space companies … everyone is playing a vital role for humanity in this battle for freedom and democracy,” Spire CEO Peter Platzer told analysts March 9 in an earnings call.

Spire is providing GPS telemetry data to help detect jamming as part of a project run by the U.S. Space Systems Command to figure out ways to automate manual data-analysis techniques and produce more timely intelligence for military operations.

Dear everyone

We (educators, scientists, and psychologists) have started an educational non-profit, the Earthlings Hub, to help kids affected by the war. We talk with them about STEM, but also about the complexity of the world, the philosophy of science, the future, and existential risks. We also offer psychological help to their parents. Our advisory board includes NASA astronaut Greg Chamitoff, AI researcher Joscha Bach, Uri Wilensky, professor of learning and cognition and author of the NetLogo language, lead early-math educator Maria Droujkova, and others. Please share, participate, and donate! https://www.earthlingshub.org/
