
And they say computers can’t create art.


In 1642, famous Dutch painter Rembrandt van Rijn completed a large painting called Militia Company of District II under the Command of Captain Frans Banninck Cocq — today, the painting is commonly referred to as The Night Watch. It was the height of the Dutch Golden Age, and The Night Watch brilliantly showcased that.

The painting measures 363 cm × 437 cm (11.91 ft × 14.34 ft), so big that the figures in it are almost life-sized, but that’s only the start of what makes it so special. Rembrandt made dramatic use of light and shadow and also created the perception of motion in what would normally be a stationary military group portrait. Unfortunately, though, the painting was trimmed in 1715 to fit between two doors at Amsterdam City Hall.

For over 300 years, the painting has been missing 60 cm (2 ft) from the left, 22 cm from the top, 12 cm from the bottom and 7 cm from the right. Now, computer software has restored the missing parts.

A team of researchers at Johannes Kepler University has developed an autonomous drone with a new type of technology to improve search-and-rescue efforts. In a paper published in the journal Science Robotics, the group describes its drone modifications. Andreas Birk of Jacobs University Bremen has published a Focus piece in the same journal issue outlining the work by the team in Austria.

Finding people lost (or hiding) in the forest is difficult because of the tree cover. People in planes and helicopters have difficulty seeing through the canopy to the ground below, where people might be walking or even lying down. The same problem exists for thermal applications: heat sensors cannot pick up readings adequately through the canopy. Efforts have been made to add drones to search-and-rescue operations, but they suffer from the same problems, because the pilots controlling them remotely must still search the ground below through the canopy. In this new effort, the researchers have added new technology that helps both to see through the tree canopy and to highlight people who might be under it.

The new technology is based on what the researchers describe as an airborne optical sectioning (AOS) algorithm: it uses the power of a computer to defocus occluding objects such as the tops of trees. The second part of the new device uses thermal imaging to highlight the heat emitted from a warm body. A machine-learning application then determines whether the heat signals come from humans, animals or other sources. The new hardware was then affixed to a standard autonomous drone. The computer in the drone uses locational positioning to determine where to search, along with cues from the AOS and thermal sensors. If a possible match is made, the drone automatically moves closer to the target to get a better look. If its sensors indicate a match, it signals the research team, giving them the coordinates. In 17 field experiments with the newly outfitted drone, the researchers found it was able to locate 38 of 42 people hidden below tree canopies.
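The core trick in airborne optical sectioning is computational rather than optical: the drone records many frames along its flight path, each frame is projected onto the ground plane, and the registered frames are averaged. Points on the ground line up across frames and stay sharp, while leaves and branches land on different pixels in each frame and blur away. Here is a minimal sketch of that integration step, assuming the registration has already been done (the function names are illustrative, not the team’s code):

```python
# Sketch of the AOS integration step: average many aerial frames that
# have been registered to a common ground plane. Ground points align and
# stay sharp; occluders (leaves, branches) fall on different pixels in
# each frame and are "defocused" away by the averaging.
import numpy as np

def integrate_focal_stack(registered_frames):
    stack = np.stack(registered_frames, axis=0).astype(np.float32)
    return stack.mean(axis=0)

def flag_warm_spots(thermal_integral, threshold_celsius=30.0):
    # Candidate detections for the classifier: pixels whose integrated
    # thermal reading is warm enough to be a body.
    ys, xs = np.where(thermal_integral > threshold_celsius)
    return list(zip(xs.tolist(), ys.tolist()))
```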

Without GPS, autonomous systems get lost easily. Now a new algorithm developed at Caltech allows autonomous systems to recognize where they are simply by looking at the terrain around them—and for the first time, the technology works regardless of seasonal changes to that terrain.

Details about the process were published on June 23 in the journal Science Robotics.

The general process, known as visual terrain-relative navigation (VTRN), was first developed in the 1960s. By comparing nearby terrain to high-resolution satellite images, autonomous systems can locate themselves.
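As a toy illustration of the matching step (not the Caltech algorithm itself), a vehicle could score its downward-looking view against candidate satellite tiles and take the best match as its position estimate:

```python
# Toy VTRN matching: pick the satellite tile that best correlates with
# the current downward-looking camera view.
import numpy as np

def ncc(a, b):
    # Normalized cross-correlation between two equal-size patches.
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def localize(current_view, satellite_tiles):
    # satellite_tiles: dict mapping (lat, lon) -> reference image patch.
    return max(satellite_tiles, key=lambda c: ncc(current_view, satellite_tiles[c]))
```

The catch, and the gap the new work targets, is that raw pixel correlation breaks down when a snow-covered winter view is compared against summer reference imagery, which is why handling seasonal change required a new algorithm.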

If you walk down the street shouting out the names of every object you see (garbage truck! bicyclist! sycamore tree!), most people would not conclude you are smart. But if you work your way through an obstacle course, showing them how to navigate a series of challenges to get to the end unscathed, they would.

Most machine learning algorithms are shouting names in the street. They perform perception tasks that a person can do in under a second. But another kind of AI, deep reinforcement learning, is strategic: it learns how to take a series of actions in order to reach a goal. That’s powerful and smart, and it’s going to change a lot of industries.
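To make “a series of actions in order to reach a goal” concrete, here is tabular Q-learning, the classical precursor of deep reinforcement learning (deep RL replaces the table with a neural network); the five-cell corridor environment is my own toy example:

```python
# Tabular Q-learning on a toy corridor: the agent must learn a sequence
# of "go right" actions to reach a goal, rather than a one-shot label.
import random

N_STATES, GOAL = 5, 4                       # cells 0..4; reward only at cell 4
ALPHA, GAMMA = 0.5, 0.9                     # learning rate, discount factor
Q = [[0.0, 0.0] for _ in range(N_STATES)]   # Q[state][action]; 0 = left, 1 = right

def step(s, a):
    s2 = max(0, min(N_STATES - 1, s + (1 if a == 1 else -1)))
    return s2, (1.0 if s2 == GOAL else 0.0), s2 == GOAL

for _ in range(500):                        # training episodes
    s, done = 0, False
    while not done:
        a = random.randrange(2)             # explore randomly; Q-learning is off-policy
        s2, r, done = step(s, a)
        # Move Q(s, a) toward r + gamma * max_a' Q(s', a').
        Q[s][a] += ALPHA * (r + GAMMA * max(Q[s2]) - Q[s][a])
        s = s2

print([q.index(max(q)) for q in Q[:GOAL]])  # learned policy: [1, 1, 1, 1] (always go right)
```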

Two industries on the cusp of AI transformations are manufacturing and supply chain. The ways we make and ship stuff are heavily dependent on groups of machines working together, and the efficiency and resiliency of those machines are the foundation of our economy and society. Without them, we can’t buy the basics we need to live and work.

Chipmaker patches nine high-severity bugs in its Jetson SoC framework tied to the way it handles low-level cryptographic algorithms.

Flaws impacting millions of internet of things (IoT) devices running NVIDIA’s Jetson chips open the door for a variety of hacks, including denial-of-service (DoS) attacks or the siphoning of data.

NVIDIA released patches addressing nine high-severity vulnerabilities, along with eight additional bugs of lower severity. The patches cover a wide swath of NVIDIA’s chipsets typically used for embedded computing systems, machine-learning applications and autonomous devices such as robots and drones.
Impacted products include the Jetson chipset series: AGX Xavier, Xavier NX/TX1, Jetson TX2 (including Jetson TX2 NX) and Jetson Nano devices (including Jetson Nano 2GB), found in the NVIDIA JetPack software development kit. The patches were delivered as part of NVIDIA’s June security bulletin, released Friday.

Last week, I wrote an analysis of Reward Is Enough, a paper by scientists at DeepMind. As the title suggests, the researchers hypothesize that the right reward is all you need to create the abilities associated with intelligence, such as perception, motor functions, and language.

This is in contrast with AI systems that try to replicate specific functions of natural intelligence such as classifying images, navigating physical environments, or completing sentences.

The researchers go as far as suggesting that with a well-defined reward, a complex environment, and the right reinforcement learning algorithm, we will be able to reach artificial general intelligence: the kind of problem-solving and cognitive abilities found in humans and, to a lesser degree, in animals.
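In standard reinforcement-learning notation, the hypothesis amounts to the claim that maximizing one scalar objective, the expected discounted sum of future rewards, is enough for those abilities to emerge as by-products:

$$\max_{\pi}\; \mathbb{E}_{\pi}\!\left[\sum_{t=0}^{\infty} \gamma^{t} r_{t}\right], \qquad 0 \le \gamma < 1,$$

where $\pi$ is the agent’s policy, $r_t$ is the reward at time $t$, and $\gamma$ is the discount factor.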

As the number of qubits in early quantum computers increases, their creators are opening up access via the cloud. IBM has its IBM Q network, for instance, while Microsoft has integrated quantum devices into its Azure cloud-computing platform. By combining these platforms with quantum-inspired optimisation algorithms and variational quantum algorithms, researchers could start to see some early benefits of quantum computing in the fields of chemistry and biology within the next few years. In time, Google’s Sergio Boixo hopes that quantum computers will be able to tackle some of the existential crises facing our planet. “Climate change is an energy problem; energy is a physical, chemical process,” he says.

“Maybe if we build the tools that allow the simulations to be done, we can construct a new industrial revolution that will hopefully be a more efficient use of energy.” But eventually, the area where quantum computers might have the biggest impact is in quantum physics itself.
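The variational quantum algorithms mentioned above are the likeliest route to those early chemistry and energy applications. In the canonical example, the variational quantum eigensolver, a classical optimiser tunes circuit parameters $\theta$ so that the energy measured for a trial state $|\psi(\theta)\rangle$ of a molecular Hamiltonian $H$ is as low as possible:

$$\theta^{*} = \arg\min_{\theta}\, \langle \psi(\theta) \,|\, H \,|\, \psi(\theta) \rangle,$$

with the quantum processor evaluating the energy and a conventional computer proposing the next $\theta$.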

The Large Hadron Collider, the world’s largest particle accelerator, collects about 300 gigabytes of data a second as it smashes protons together to try to unlock the fundamental secrets of the universe. Analysing that data requires huge amounts of computing power; right now the work is split across 170 data centres in 42 countries. Some scientists at CERN, the European Organisation for Nuclear Research, hope quantum computers could help speed up the analysis by enabling them to run more accurate simulations before conducting real-world tests. They’re starting to develop algorithms and models that will help them harness the power of quantum computers when the devices get good enough to help.

A recent string of problems suggests facial recognition’s reliability issues are hurting people in a moment of need. Motherboard reports that there are ongoing complaints about the ID.me facial recognition system at least 21 states use to verify people seeking unemployment benefits. People have gone weeks or months without benefits when the Face Match system doesn’t verify their identities, and have sometimes had no luck getting help through a video chat system meant to solve these problems.

ID.me chief Blake Hall blamed the problems on users rather than the technology. Face Match algorithms have “99.9% efficacy,” he said, and there was “no relationship” between skin tone and recognition failures. Hall instead suggested that people weren’t sharing selfies properly or otherwise weren’t following instructions.

Motherboard noted, though, that at least some people have had three attempts to pass the facial recognition check. The outlet also pointed out that the company’s claims of national unemployment fraud costs have ballooned rapidly in just the past few months, from a reported $100 billion to $400 billion. While Hall attributed that to expanding “data points,” he didn’t say just how his firm calculated the damage. It’s not clear just what the real fraud threat is, in other words.
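A back-of-the-envelope check shows why the reported failures are hard to square with that framing: if the quoted 99.9% figure meant each verification attempt independently succeeded with probability 0.999, the chance of a legitimate user failing three attempts in a row would be

$$(1 - 0.999)^{3} = 10^{-9},$$

so failures that persist across attempts point to correlated causes, such as image quality, unclear instructions or the algorithm itself, rather than independent random error.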

AI has finally come full circle.

A new suite of algorithms by Google Brain can now design computer chips (specifically, chips tailored for running AI software) that vastly outperform those designed by human experts. And the system works in just a few hours, dramatically slashing the weeks- or months-long process that normally gums up digital innovation.

At the heart of these robotic chip designers is a type of machine learning called deep reinforcement learning. This family of algorithms, loosely based on the workings of the human brain, has triumphed over its biological inspiration in games such as chess, Go, and nearly the entire Atari catalog.
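The published approach frames chip floorplanning as a game: the agent places one block per move onto a grid, and the reward, delivered once the layout is complete, reflects proxies such as total wire length. Here is a toy rendering of that framing, using random rollouts in place of a learned policy (the block names and netlist are made up for illustration):

```python
# Placement-as-a-game, in miniature: each "episode" places blocks onto a
# grid, and the final reward is the negative total wire length between
# connected blocks. A real agent learns a policy from many such episodes;
# here we simply keep the best of 1,000 random rollouts.
import itertools, random

GRID = 4                                    # 4x4 placement grid
BLOCKS = ["cpu", "cache", "dram_if", "io"]  # hypothetical macro blocks
NETS = [("cpu", "cache"), ("cache", "dram_if"), ("cpu", "io")]

def wirelength(placement):
    # Sum of Manhattan distances between connected blocks (lower is better).
    return sum(abs(placement[a][0] - placement[b][0]) +
               abs(placement[a][1] - placement[b][1]) for a, b in NETS)

def random_episode():
    cells = random.sample(list(itertools.product(range(GRID), range(GRID))), len(BLOCKS))
    placement = dict(zip(BLOCKS, cells))    # one placement action per block
    return placement, -wirelength(placement)

best_placement, best_reward = max((random_episode() for _ in range(1000)),
                                  key=lambda e: e[1])
print(best_placement, best_reward)
```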