
Reward maximisation is one strategy by which reinforcement learning could achieve artificial general intelligence. However, deep reinforcement learning algorithms should not depend on reward maximisation alone.


Identifying dual-purpose therapeutic targets implicated in both aging and disease could help extend healthspan and delay age-related health issues.

Introducing a novel visual tool for explaining the results of classification algorithms, with examples in R and Python.


Classification algorithms aim to identify the groups to which a set of observations belongs. A machine learning practitioner typically builds multiple models and selects as the final classifier the one that optimizes a set of accuracy metrics on a held-out test set. Sometimes, practitioners and stakeholders want more from the classification model than just predictions. They may wish to know the reasons behind a classifier’s decisions, especially when it is built for high-stakes applications. For instance, consider a medical setting, where a classifier determines a patient to be at high risk for developing an illness. If medical experts can learn the contributing factors to this prediction, they could use this information to help determine suitable treatments.
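The build-several-models-and-pick-one workflow can be sketched in a few lines of Python. The dataset and candidate models below are illustrative choices, not anything from this article:

```python
# Fit several candidate classifiers and keep the one with the best
# accuracy on a held-out test set (illustrative dataset and models).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

candidates = {
    "logistic": LogisticRegression(max_iter=5000),
    "forest": RandomForestClassifier(random_state=0),
}
# Held-out accuracy for each candidate; the final classifier is the argmax.
scores = {name: accuracy_score(y_test, model.fit(X_train, y_train).predict(X_test))
          for name, model in candidates.items()}
best_name = max(scores, key=scores.get)
print(best_name, scores[best_name])
```

In practice one would compare more candidates and more metrics than raw accuracy, but the selection step looks the same.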

Some models, such as single decision trees, are transparent, meaning that they show the mechanism for how they make decisions. More complex models, however, tend to be the opposite — they are often referred to as “black boxes”, as they provide no explanation for how they arrive at their decisions. Unfortunately, opting for transparent models over black boxes does not always solve the explainability problem. The relationship between a set of observations and its labels is often too complex for a simple model to suffice; transparency can come at the cost of accuracy [1].
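Transparency in the decision-tree sense is literal: the whole decision mechanism can be printed as threshold rules. A minimal sketch with scikit-learn, on an arbitrary toy dataset:

```python
# A shallow decision tree is transparent: its mechanism can be dumped
# as explicit if/else threshold rules (toy iris dataset for illustration).
from sklearn.datasets import load_iris
from sklearn.tree import DecisionTreeClassifier, export_text

data = load_iris()
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(data.data, data.target)
rules = export_text(tree, feature_names=data.feature_names)
print(rules)  # e.g. "|--- petal width (cm) <= ..." followed by class labels
```

A deep ensemble of hundreds of such trees offers no comparably readable dump, which is exactly the black-box problem the article describes.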

The increasing use of black-box models in high-stakes applications, combined with the need for explanations, has led to the development of Explainable AI (XAI), a set of methods that help humans understand the outputs of machine learning models. Explainability is a crucial part of the responsible development and use of AI.
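One widely used model-agnostic XAI technique (not the visual tool introduced in this article) is permutation importance: shuffle one feature at a time and measure how much the black box's score drops. A minimal sketch, again with an illustrative dataset and model:

```python
# Permutation importance: features whose shuffling hurts accuracy most
# are the ones the black-box model relies on (illustrative dataset/model).
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

data = load_breast_cancer()
model = RandomForestClassifier(random_state=0).fit(data.data, data.target)
result = permutation_importance(model, data.data, data.target,
                                n_repeats=5, random_state=0)

# Report the three most influential features.
top = result.importances_mean.argsort()[::-1][:3]
for i in top:
    print(data.feature_names[i], round(result.importances_mean[i], 4))
```

Methods like this explain *which* inputs drove a prediction without requiring the model itself to be transparent.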

New research artificially creating a rare form of matter known as spin glass could spark a new paradigm in artificial intelligence by allowing algorithms to be directly printed as physical hardware. The unusual properties of spin glass enable a form of AI that can recognize objects from partial images much like the brain does and show promise for low-power computing, among other intriguing capabilities.

“Our work accomplished the first experimental realization of an artificial spin glass consisting of nanomagnets arranged to replicate a neural network,” said Michael Saccone, a postdoctoral researcher at Los Alamos National Laboratory and lead author of the new paper in Nature Physics. “Our paper lays the groundwork we need to use these practically.”

Spin glasses are a way to think about material structure mathematically. Being free, for the first time, to tweak the interactions within these systems using electron-beam lithography makes it possible to represent a variety of computing problems in spin-glass networks, Saccone said.

One of Melbourne’s busiest roads will host a world-leading traffic management system using the latest technology to reduce traffic jams and improve road safety.

The ‘Intelligent Corridor’ at Nicholson Street, Carlton was launched by the University of Melbourne, Austrian technology firm Kapsch TrafficCom and the Victorian Department of Transport.

Covering a 2.5 kilometre stretch of Nicholson Street between Alexandra and Victoria Parades, the Intelligent Corridor will use sensors, cloud-based AI, machine learning algorithms, predictive models and real-time data capture to improve traffic management – easing congestion, improving road safety for cars, pedestrians and cyclists, and reducing emissions from clogged traffic.

In recent decades, machine learning and deep learning algorithms have become increasingly advanced, so much so that they are now being introduced in a variety of real-world settings. In recent years, some computer scientists and electronics engineers have been exploring the development of an alternative type of artificial intelligence (AI) tools, known as diffractive optical neural networks.

Diffractive optical neural networks are deep neural networks based on diffractive optical technology (i.e., lenses or other components that can alter the phase of light propagating through them). While these networks have been found to achieve ultra-fast computing speeds and high energy efficiencies, typically they are very difficult to program and adapt to different use cases.

Researchers at Southeast University, Peking University and Pazhou Laboratory in China have recently developed a diffractive deep neural network that can be easily programmed to complete different tasks. Their network, introduced in a paper published in Nature Electronics, is based on a flexible and multi-layer array.

Atomic clocks are the best sensors mankind has ever built. Today, they can be found in national standards institutes or satellites of navigation systems. Scientists all over the world are working to further optimize the precision of these clocks. Now, a research group led by Peter Zoller, a theorist from Innsbruck, Austria, has developed a new concept that can be used to operate sensors with even greater precision irrespective of which technical platform is used to make the sensor. “We answer the question of how precise a sensor can be with existing control capabilities, and give a recipe for how this can be achieved,” explain Denis Vasilyev and Raphael Kaubrügger from Peter Zoller’s group at the Institute of Quantum Optics and Quantum Information at the Austrian Academy of Sciences in Innsbruck.

For this purpose, the physicists use a method from quantum information processing: Variational quantum algorithms describe a circuit of quantum gates that depends on free parameters. Through optimization routines, the sensor autonomously finds the best settings for an optimal result. “We applied this technique to a problem from metrology—the science of measurement,” Vasilyev and Kaubrügger explain. “This is exciting because historically advances in quantum physics were motivated by metrology, and quantum information science in turn emerged from that. So, we’ve come full circle here,” Peter Zoller says. With the new approach, scientists can optimize quantum sensors to the point where they achieve the best precision that is technically achievable.
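The variational idea itself (free parameters tuned by an optimizer until a cost is minimal) can be caricatured classically. The sketch below is a hypothetical stand-in, not the authors' quantum-metrology algorithm: the cost function is an invented surrogate for measurement uncertainty, and the optimizer is an off-the-shelf classical routine:

```python
# Classical toy of the variational loop: a parameterized model with free
# parameters theta, and an optimizer that autonomously finds the settings
# minimizing a cost (here an invented surrogate for sensor uncertainty).
import numpy as np
from scipy.optimize import minimize

def cost(theta):
    # Hypothetical cost landscape with its minimum (value 0) at
    # theta = (pi/2, 0); stands in for the uncertainty of a measurement.
    return 1.0 - np.sin(theta[0]) + theta[1] ** 2

result = minimize(cost, x0=np.array([0.1, 0.5]), method="Nelder-Mead")
print(result.x, result.fun)  # converges near (pi/2, 0) with cost near 0
```

In the actual scheme, the parameterized object is a circuit of quantum gates on the sensor itself, and the cost is a genuine metrological figure of merit.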