Explainable AI (XAI) with Class Maps

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify to which groups a set of observations belong. A machine learning practitioner typically builds multiple models and selects a final classifier to be one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
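The build-several-models-and-pick-the-best workflow described above can be sketched in a few lines. This is a minimal illustration assuming scikit-learn; the dataset and candidate models are placeholders, not from the article.

```python
# Minimal sketch of model selection on a held-out test set.
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0)

# Candidate classifiers: one simple, one more flexible.
candidates = {
    "logistic_regression": make_pipeline(StandardScaler(),
                                         LogisticRegression(max_iter=1000)),
    "random_forest": RandomForestClassifier(random_state=0),
}

# Fit each candidate and score it on the held-out test set.
scores = {}
for name, model in candidates.items():
    model.fit(X_train, y_train)
    scores[name] = accuracy_score(y_test, model.predict(X_test))

best = max(scores, key=scores.get)  # the selected final classifier
```

In practice one would compare more candidates and more metrics (and use cross-validation rather than a single split), but the selection loop has this shape.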

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].
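What "transparent" means for a single decision tree can be shown concretely: the fitted model can print the exact if/else rules it applies. A small sketch, assuming scikit-learn (the dataset is illustrative):

```python
# A shallow decision tree is transparent: its decision rules can be printed.
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# export_text renders the full decision mechanism as readable thresholds.
rules = export_text(tree, feature_names=load_iris().feature_names)
print(rules)
```

A random forest or neural network trained on the same data offers no comparably direct readout, which is exactly the gap XAI methods aim to fill.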

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.

Printing circuits on rare nanomagnets puts a new spin on computing

New research that artificially creates a rare form of matter known as spin glass could spark a new paradigm in artificial intelligence by allowing algorithms to be directly printed as physical hardware. The unusual properties of spin glass enable a form of AI that can recognize objects from partial images much like the brain does and show promise for low-power computing, among other intriguing capabilities.

“Our work accomplished the first experimental realization of an artificial spin glass consisting of nanomagnets arranged to replicate a neural network,” said Michael Saccone, a post-doctoral researcher at Los Alamos National Laboratory and lead author of the new paper in Nature Physics. “Our paper lays the groundwork we need to use these practically.”

Spin glasses are a way to think about material structure mathematically. Being free, for the first time, to tweak the interaction within these systems using electron-beam lithography makes it possible to represent a variety of computing problems in spin-glass networks, Saccone said.
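The pattern-completion behavior mentioned above, recognizing an object from a partial image, can be illustrated with a classical Hopfield network, whose energy function is that of a spin glass. This is a toy software analogy, not the nanomagnet hardware from the paper:

```python
import numpy as np

# Toy Hopfield network: spins s_i = +/-1 coupled by J_ij, energy E = -1/2 s.J.s.
# A stored pattern becomes an energy minimum, so a corrupted input relaxes
# back to the full pattern -- recognition from partial information.
pattern = np.array([1, 1, 1, -1, -1, -1, 1, -1])
J = np.outer(pattern, pattern).astype(float)  # Hebbian couplings
np.fill_diagonal(J, 0.0)                      # no self-coupling

state = pattern.copy()
state[:3] = -1                                # corrupt part of the "image"
for _ in range(10):                           # relax spins toward lower energy
    for i in range(len(state)):
        state[i] = 1 if J[i] @ state >= 0 else -1
# state has now converged back to the stored pattern
```

Printing the network as physical nanomagnets amounts to realizing couplings like `J` in hardware, so the relaxation happens physically rather than in a loop.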

World’s smartest traffic management system launched in Melbourne

One of Melbourne’s busiest roads will host a world-leading traffic management system using the latest technology to reduce traffic jams and improve road safety.

The ‘Intelligent Corridor’ at Nicholson Street, Carlton was launched by the University of Melbourne, Austrian technology firm Kapsch TrafficCom and the Victorian Department of Transport.

Covering a 2.5 kilometre stretch of Nicholson Street between Alexandra and Victoria Parades, the Intelligent Corridor will use sensors, cloud-based AI, machine learning algorithms, predictive models and real-time data capture to improve traffic management – easing congestion, improving road safety for cars, pedestrians and cyclists, and reducing emissions from clogged traffic.

A diffractive neural network that can be flexibly programmed

In recent decades, machine learning and deep learning algorithms have become increasingly advanced, so much so that they are now being introduced in a variety of real-world settings. In recent years, some computer scientists and electronics engineers have been exploring the development of an alternative type of artificial intelligence (AI) tools, known as diffractive optical neural networks.

Diffractive optical neural networks are deep neural networks based on diffractive optical technology (i.e., lenses or other components that can alter the phase of light propagating through them). While these networks have been found to achieve ultra-fast computing speeds and high energy efficiencies, typically they are very difficult to program and adapt to different use cases.
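The basic computation in a diffractive network, light picking up a phase shift at each layer and diffracting to the next, can be sketched numerically. A minimal 1-D simulation using the angular-spectrum method; all sizes, wavelengths and the random masks are illustrative, not taken from the paper:

```python
import numpy as np

# Toy 1-D diffractive stack: a complex optical field passes through phase
# masks, with free-space propagation between layers modeled in the Fourier
# domain (angular-spectrum method).
n, wavelength, dx, z = 256, 532e-9, 1e-6, 1e-3   # samples, 532 nm, 1 um, 1 mm
fx = np.fft.fftfreq(n, d=dx)                      # spatial frequencies
kz = 2j * np.pi * z * np.sqrt(np.maximum(0.0, (1 / wavelength)**2 - fx**2))

def propagate(u):
    # Unitary free-space propagation: phase factor applied in Fourier space.
    return np.fft.ifft(np.fft.fft(u) * np.exp(kz))

rng = np.random.default_rng(0)
field = np.ones(n, dtype=complex) / np.sqrt(n)    # unit-energy plane wave
for _ in range(3):                                # three diffractive layers
    mask = np.exp(1j * rng.uniform(0, 2 * np.pi, n))  # phase-only modulation
    field = propagate(field * mask)

intensity = np.abs(field)**2                      # what a detector records
```

"Programming" such a network means choosing the phase masks; the difficulty the researchers address is making those masks reconfigurable after fabrication. Because phase masks and propagation are both unitary, the total optical energy is conserved through the stack.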

Researchers at Southeast University, Peking University and Pazhou Laboratory in China have recently developed a diffractive deep neural network that can be easily programmed to complete different tasks. Their network, introduced in a paper published in Nature Electronics, is based on a flexible, multi-layer array.

Physicists report on first programmable quantum sensor

Atomic clocks are the best sensors mankind has ever built. Today, they can be found in national standards institutes or satellites of navigation systems. Scientists all over the world are working to further optimize the precision of these clocks. Now, a research group led by Peter Zoller, a theorist from Innsbruck, Austria, has developed a new concept that can be used to operate sensors with even greater precision irrespective of which technical platform is used to make the sensor. “We answer the question of how precise a sensor can be with existing control capabilities, and give a recipe for how this can be achieved,” explain Denis Vasilyev and Raphael Kaubrügger from Peter Zoller’s group at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences in Innsbruck.

For this purpose, the physicists use a method from quantum information processing: variational quantum algorithms describe a circuit of quantum gates that depends on free parameters. Through optimization routines, the sensor autonomously finds the best settings for an optimal result. “We applied this technique to a problem from metrology—the science of measurement,” Vasilyev and Kaubrügger explain. “This is exciting because historically advances in [physics] were motivated by metrology, and in turn [quantum information science] emerged from that. So, we’ve come full circle here,” Peter Zoller says. With the new approach, scientists can optimize quantum sensors to the point where they achieve the best precision technically possible.
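The feedback structure described above, a parameterized gate sequence tuned by a classical optimizer, can be reduced to a toy single-qubit example. This is purely illustrative; a real variational quantum sensor optimizes an entangling circuit on hardware:

```python
import numpy as np

# Toy variational loop: a one-parameter "circuit" (a single-qubit Ry rotation)
# whose free parameter is tuned classically to maximize the probability of a
# target measurement outcome.
def ry(theta):
    # Rotation gate about the y-axis, acting on (amplitude_0, amplitude_1).
    return np.array([[np.cos(theta / 2), -np.sin(theta / 2)],
                     [np.sin(theta / 2),  np.cos(theta / 2)]])

target = np.array([1.0, 1.0]) / np.sqrt(2)    # the |+> state

def cost(theta):
    # Infidelity between the circuit output (applied to |0>) and the target.
    state = ry(theta) @ np.array([1.0, 0.0])
    return 1 - abs(target @ state)**2

# A grid-refinement search stands in for the optimizer (e.g. SPSA, gradient
# descent) that a real variational algorithm would use.
thetas = np.linspace(0, np.pi, 1001)
best = thetas[np.argmin([cost(t) for t in thetas])]
```

The optimum lands at theta = pi/2, where the rotation maps |0> exactly onto the target state; in the metrology setting the cost would instead be a measure of sensing precision.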

An artificial intelligence invents 40,000 chemical weapons in just 6 hours

A.I. is only beginning to show what it can do for modern medicine.

In today’s society, artificial intelligence (A.I.) is mostly used for good. But what if it was not?

Naive thinking

“The thought had never previously struck us. We were vaguely aware of security concerns around work with pathogens or toxic chemicals, but that did not relate to us; we primarily operate in a virtual setting. Our work is rooted in building machine learning models for therapeutic and toxic targets to better assist in the design of new molecules for drug discovery,” wrote the researchers in their paper. “We have spent decades using computers and A.I. to improve human health—not to degrade it. We were naive in thinking about the potential misuse of our trade, as our aim had always been to avoid molecular features that could interfere with the many different classes of proteins essential to human life.”


Researchers from Collaborations Pharmaceuticals tweaked an artificial intelligence model to search for chemical weapons and, strikingly, the machine learning algorithm found 40,000 candidates in just six hours.

Lensless Camera Captures Cellular-Level 3D Details

Rice University researchers have tested a tiny lensless microscope called Bio-FlatScope, capable of producing high levels of detail in living samples. The team imaged plants, hydra, and, to a limited extent, a human.

A previous iteration of the technology, FlatCam, was a lensless device that channeled light through a mask and directly onto a camera sensor, aimed primarily outward at the world at large. The raw images looked like static, but a custom algorithm translated the raw data into focused images.

The device described in the current research looks inward to image micron-scale targets such as cells and blood vessels inside the body, even through skin. The technology uses a sophisticated phase mask to generate patterns of light that fall directly onto the chip, the researchers said. The mask in the original FlatCam looked like a barcode and limited the amount of light that passed through to the sensor.
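The static-to-image step described above can be sketched as a linear inverse problem: the mask scrambles the scene into a measurement y = A x, and the algorithm inverts the known mask response. A dense toy model (the real FlatCam/Bio-FlatScope pipeline uses a calibrated separable model, and `A`, the scene, and the regularization weight here are all illustrative):

```python
import numpy as np

# Toy lensless imaging: a coded-aperture matrix A maps a 1-D "scene" to a
# static-like sensor reading y; ridge-regularized least squares recovers it.
rng = np.random.default_rng(1)
n = 64
scene = np.zeros(n)
scene[20:28] = 1.0                                 # a simple bright feature

A = rng.integers(0, 2, size=(n, n)).astype(float)  # binary mask response
y = A @ scene                                      # raw reading looks like noise

# Tikhonov (ridge) reconstruction: x_hat = (A^T A + lam I)^-1 A^T y.
lam = 1e-9
x_hat = np.linalg.solve(A.T @ A + lam * np.eye(n), A.T @ y)
```

With real sensor noise the regularization weight is raised well above this near-zero value, trading a little blur for stability; the point is that the mask pattern, once calibrated, makes the scrambled measurement invertible.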
