
Chipmaker patches nine high-severity bugs in its Jetson SoC framework tied to the way it handles low-level cryptographic algorithms.

Flaws impacting millions of internet of things (IoT) devices running NVIDIA’s Jetson chips open the door for a variety of hacks, including denial-of-service (DoS) attacks or the siphoning of data.

NVIDIA released patches addressing nine high-severity vulnerabilities, along with eight additional bugs of lesser severity. The patches cover a wide swath of NVIDIA’s chipsets typically used in embedded computing systems, machine-learning applications and autonomous devices such as robots and drones.
Impacted products include the Jetson chipset series: AGX Xavier, Xavier NX/TX1, Jetson TX2 (including Jetson TX2 NX) and Jetson Nano devices (including Jetson Nano 2GB), all supported by the NVIDIA JetPack software development kit. The patches were delivered as part of NVIDIA’s June security bulletin, released Friday.

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence, the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.
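To make those ingredients concrete, here is a minimal sketch of the reward-driven loop the paper describes: a toy one-dimensional corridor solved with tabular Q-learning. The environment, reward, and hyperparameters are invented for illustration and are far simpler than anything the paper contemplates.

```python
import numpy as np

# Toy 1-D corridor: the agent starts at cell 0 and the reward sits
# only at the goal cell.  A single scalar reward is the only learning
# signal, in the spirit of the "reward is enough" hypothesis.
N_STATES, GOAL = 6, 5
ACTIONS = [-1, +1]                      # step left, step right
alpha, gamma, eps = 0.1, 0.95, 0.1     # learning rate, discount, exploration
Q = np.zeros((N_STATES, len(ACTIONS)))

rng = np.random.default_rng(0)
for episode in range(300):
    s = 0
    while s != GOAL:
        if rng.random() < eps:
            a = rng.integers(len(ACTIONS))                       # explore
        else:
            a = rng.choice(np.flatnonzero(Q[s] == Q[s].max()))   # greedy, random tie-break
        s_next = min(max(s + ACTIONS[a], 0), N_STATES - 1)
        r = 1.0 if s_next == GOAL else 0.0
        Q[s, a] += alpha * (r + gamma * Q[s_next].max() - Q[s, a])
        s = s_next

print(Q.argmax(axis=1))  # learned policy: 1 ("go right") in every non-goal cell
```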

As the number of qubits in early quantum computers increases, their creators are opening up access via the cloud. IBM has its IBM Q network, for instance, while Microsoft has integrated quantum devices into its Azure cloud-computing platform. By combining these platforms with quantum-inspired optimisation algorithms and variational quantum algorithms, researchers could start to see some early benefits of quantum computing in the fields of chemistry and biology within the next few years. In time, Google’s Sergio Boixo hopes that quantum computers will be able to tackle some of the existential crises facing our planet. “Climate change is an energy problem – energy is a physical, chemical process,” he says.

“Maybe if we build the tools that allow the simulations to be done, we can construct a new industrial revolution that will hopefully be a more efficient use of energy.” But eventually, the area where quantum computers might have the biggest impact is in quantum physics itself.

The Large Hadron Collider, the world’s largest particle accelerator, collects about 300 gigabytes of data a second as it smashes protons together to try and unlock the fundamental secrets of the universe. To analyse it requires huge amounts of computing power – right now it’s split across 170 data centres in 42 countries. Some scientists at CERN – the European Organisation for Nuclear Research – hope quantum computers could help speed up the analysis of data by enabling them to run more accurate simulations before conducting real-world tests. They’re starting to develop algorithms and models that will help them harness the power of quantum computers when the devices get good enough to help.

A recent string of problems suggests facial recognition’s reliability issues are hurting people in a moment of need. Motherboard reports that there are ongoing complaints about the ID.me facial recognition system at least 21 states use to verify people seeking unemployment benefits. People have gone weeks or months without benefits when the Face Match system doesn’t verify their identities, and have sometimes had no luck getting help through a video chat system meant to solve these problems.

ID.me chief Blake Hall blamed the problems on users rather than the technology. Face Match algorithms have “99.9% efficacy,” he said, and there was “no relationship” between skin tone and recognition failures. Hall instead suggested that people weren’t sharing selfies properly or otherwise weren’t following instructions.

Motherboard noted, though, that at least some people get only three attempts to pass the facial recognition check. The outlet also pointed out that the company’s estimates of national unemployment fraud costs have ballooned rapidly in just the past few months, from a reported $100 billion to $400 billion. While Hall attributed that to expanding “data points,” he didn’t say just how his firm calculated the damage. It’s not clear just what the real fraud threat is, in other words.

AI has finally come full circle.

A new suite of algorithms by Google Brain can now design computer chips, specifically those tailored for running AI software, that vastly outperform chips designed by human experts. And the system works in just a few hours, dramatically slashing the weeks- or months-long design process that normally gums up digital innovation.

At the heart of these robotic chip designers is a type of machine learning called deep reinforcement learning. This family of algorithms, loosely based on the human brain’s workings, has triumphed over its biological inspiration in games such as chess, Go, and nearly the entire Atari catalog.

Circa 2019


As quantum computing enters the industrial sphere, questions about how to manufacture qubits at scale are becoming more pressing. Here, Fernando Gonzalez-Zalba, Tsung-Yeh Yang and Alessandro Rossi explain why decades of engineering may give silicon the edge.

In the past two decades, quantum computing has evolved from a speculative playground into an experimental race. The drive to build real machines that exploit the laws of quantum mechanics, and to use such machines to solve certain problems much faster than is possible with traditional computers, will have a major impact in several fields. These include speeding up drug discovery by efficiently simulating chemical reactions; better uses of “big data” thanks to faster searches in unstructured databases; and improved weather and financial-market forecasts via smart optimization protocols.

We are still in the early stages of building these quantum information processors. Recently, a team at Google reportedly demonstrated a quantum machine that outperforms classical supercomputers, although this so-called “quantum supremacy” is expected to be too limited for useful applications. Even so, it is an important milestone in the field, and a testament to the fact that progress has become substantial and fast-paced. The prospect of significant commercial revenues has now attracted the attention of large computing corporations. By channelling their resources into collaborations with academic groups, these firms aim to push research forward at a faster pace than either sector could accomplish alone.

“These are novel living machines. They are not a traditional robot or a known species of animals. It is a new class of artifacts: a living and programmable organism,” says Joshua Bongard, an expert in computer science and robotics at the University of Vermont (UVM) and one of the leaders of the research.

As the scientist explains, these living bots do not look like traditional robots: they do not have shiny gears or robotic arms. Rather, they look more like a tiny blob of pink meat in motion, a biological machine that researchers say can accomplish things traditional robots cannot.

Xenobots are synthetic organisms designed automatically by a supercomputer to perform a specific task, using a process of trial and error (an evolutionary algorithm), and are built by a combination of different biological tissues.
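The actual xenobot body plans were evolved inside a physics simulator at UVM; the snippet below is only a toy illustration of that trial-and-error evolutionary loop, with a made-up bit-string genome and fitness function standing in for simulated tissue designs.

```python
import random

# Toy evolutionary loop in the spirit of the xenobot design pipeline:
# candidate "body plans" are bit strings, and fitness is a stand-in
# score (how well the string matches a target pattern).  The real
# system scores designs by simulating their behaviour in physics.
TARGET = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]

def fitness(design):
    return sum(d == t for d, t in zip(design, TARGET))

def mutate(design, rate=0.1):
    return [1 - g if random.random() < rate else g for g in design]

random.seed(0)
population = [[random.randint(0, 1) for _ in TARGET] for _ in range(20)]

for generation in range(50):
    population.sort(key=fitness, reverse=True)
    survivors = population[:5]                      # selection
    population = survivors + [mutate(random.choice(survivors))
                              for _ in range(15)]   # variation

best = max(population, key=fitness)
print(best, fitness(best))
```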

“Conditional witnessing” technique makes many-body entangled states easier to measure.


Quantum error correction – a crucial ingredient in bringing quantum computers into the mainstream – relies on sharing entanglement between many particles at once. Thanks to researchers in the UK, Spain and Germany, measuring those entangled states just got a lot easier. The new measurement procedure, which the researchers term “conditional witnessing”, is more robust to noise than previous techniques and minimizes the number of measurements required, making it a valuable method for testing imperfect real-life quantum systems.
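The conditional-witnessing procedure itself is not reproduced here, but as a baseline the sketch below shows how a standard entanglement witness works for a three-qubit GHZ state mixed with white noise, using numpy. A negative expectation value certifies entanglement, and the noise level at which the test starts failing illustrates why robustness to noise matters.

```python
import numpy as np

# Standard projector-based witness for a 3-qubit GHZ state (not the
# paper's "conditional witnessing" protocol): W = I/2 - |GHZ><GHZ|.
# Tr(W @ rho) < 0 certifies genuine multipartite entanglement of rho.
n = 3
d = 2 ** n
ghz = np.zeros(d); ghz[0] = ghz[-1] = 1 / np.sqrt(2)
proj = np.outer(ghz, ghz)                 # |GHZ><GHZ|
W = 0.5 * np.eye(d) - proj                # witness operator

for p in (0.2, 0.5, 0.9):                 # white-noise mixing parameter
    rho = p * proj + (1 - p) * np.eye(d) / d
    print(f"p={p}: Tr(W rho) = {np.trace(W @ rho):+.3f}")
# Negative values (p > 3/7 here) flag entanglement; noisier states
# (smaller p) fail the test, which is why noise robustness matters.
```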

Quantum computers run their algorithms on quantum bits, or qubits. These physical two-level quantum systems play an analogous role to classical bits, except that instead of being restricted to just “0” or “1” states, a single qubit can be in any combination of the two. This extra information capacity, combined with the ability to manipulate quantum entanglement between qubits (thus allowing multiple calculations to be performed simultaneously), is a key advantage of quantum computers.
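As a quick numpy illustration of that extra capacity: a qubit is a normalized two-component complex vector, measurement probabilities follow from the squared amplitudes, and the state space of multiple qubits grows as a tensor product.

```python
import numpy as np

# A qubit as a normalized 2-component state vector: unlike a classical
# bit, it can hold any complex combination a|0> + b|1> with |a|^2 + |b|^2 = 1.
ket0 = np.array([1, 0], dtype=complex)
ket1 = np.array([0, 1], dtype=complex)

psi = (ket0 + ket1) / np.sqrt(2)           # equal superposition of 0 and 1
print(np.abs(psi) ** 2)                    # Born rule: [0.5, 0.5]

# Two qubits: the state space is a tensor product (4 amplitudes), which
# is where entangled states such as (|00> + |11>)/sqrt(2) live.
bell = (np.kron(ket0, ket0) + np.kron(ket1, ket1)) / np.sqrt(2)
print(np.abs(bell) ** 2)                   # [0.5, 0, 0, 0.5]
```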

The problem with qubits

However, qubits are fragile. Virtually any interaction with their environment can cause them to collapse like a house of cards and lose their quantum correlations – a process called decoherence. If this happens before an algorithm finishes running, the result is a mess, not an answer. (You would not get much work done on a laptop that had to restart every second.) In general, the more qubits a quantum computer has, the harder they are to keep quantum; even today’s most advanced quantum processors still have fewer than 100 physical qubits.
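A toy picture of that collapse: under pure dephasing, the off-diagonal terms of a qubit’s density matrix (its quantum coherence) decay away, leaving an effectively classical mixture. The decay rate below is invented purely for illustration.

```python
import numpy as np

# Toy decoherence model: pure dephasing shrinks the off-diagonal
# coherences of the density matrix while the diagonal populations
# survive, turning a superposition into a classical 50/50 mixture.
psi = np.array([1, 1], dtype=complex) / np.sqrt(2)   # |+> superposition
rho = np.outer(psi, psi.conj())

gamma, dt = 0.5, 0.2                      # dephasing rate, time step (illustrative)
for step in range(6):
    print(f"t={step * dt:.1f}  coherence={abs(rho[0, 1]):.3f}")
    rho[0, 1] *= np.exp(-gamma * dt)      # off-diagonals decay...
    rho[1, 0] *= np.exp(-gamma * dt)
# ...until rho is diag(0.5, 0.5): the quantum correlations are gone.
```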