

Over half of the 415 vulnerabilities found in industrial control systems (ICS) were assigned CVSS v3.0 base scores above 7, the range reserved for high- and critical-severity issues, and 20% of the vulnerable ICS devices were affected by critical security issues.
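For context, CVSS v3.0 groups base scores into qualitative severity bands (0.1–3.9 Low, 4.0–6.9 Medium, 7.0–8.9 High, 9.0–10.0 Critical), which is why a score above 7 lands in the high or critical range. The short Python sketch below simply encodes those standard bands; it is illustrative and not taken from the Kaspersky report.

# Standard CVSS v3.0 qualitative severity bands.
def cvss_v3_severity(base_score: float) -> str:
    """Map a CVSS v3.0 base score (0.0-10.0) to its qualitative rating."""
    if not 0.0 <= base_score <= 10.0:
        raise ValueError("CVSS v3.0 base scores range from 0.0 to 10.0")
    if base_score == 0.0:
        return "None"
    if base_score <= 3.9:
        return "Low"
    if base_score <= 6.9:
        return "Medium"
    if base_score <= 8.9:
        return "High"
    return "Critical"

print(cvss_v3_severity(7.5))   # High
print(cvss_v3_severity(9.8))   # Critical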

As detailed in Kaspersky’s “Threat landscape for industrial automation systems H2 2018” report, “The largest number of vulnerabilities affect industrial control systems that control manufacturing processes at various enterprises (115), in the energy sector (110), and water supply (63).”

Read more

Livingston is sitting comfortably in his office in Portland, Oregon, when he appears on the screens inside the car and announces he’ll be our teleoperator this afternoon. A moment later, the MKZ pulls into traffic, responding not to the man in the driver’s seat, but to Livingston, who’s sitting in front of a bank of screens displaying feeds from the four cameras on the car’s roof, working the kind of steering wheel and pedals serious players use for games like Forza Motorsport. Livingston is a software engineer for Designated Driver, a new company that’s getting into teleoperations, the official name for remotely controlling self-driving vehicles.


Designated Driver is just the latest competitor to enter the market for the teleoperation tech that will make robo-cars work.
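The article does not describe the underlying protocol, but a teleoperation link of this kind boils down to streaming operator inputs (steering, throttle, brake) to the vehicle at a fixed rate while video flows back the other way. The Python sketch below is a hypothetical, minimal version of the operator-side command loop; the endpoint, message format, and input stub are assumptions for illustration, not Designated Driver’s actual software.

# Hypothetical operator-side command loop for a teleoperation link.
# The endpoint, message format, and input stub are illustrative assumptions.
import json
import socket
import time

VEHICLE_ADDR = ("127.0.0.1", 9000)  # placeholder for the vehicle's command endpoint

def read_operator_input() -> dict:
    """Stub: a real operator station would poll the wheel and pedal hardware here."""
    return {"steering": 0.05, "throttle": 0.20, "brake": 0.0}

def stream_commands(rate_hz: float = 50.0) -> None:
    """Send timestamped control commands to the vehicle at a fixed rate."""
    sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    period = 1.0 / rate_hz
    while True:
        command = read_operator_input()
        command["timestamp"] = time.time()  # lets the vehicle discard stale commands
        sock.sendto(json.dumps(command).encode(), VEHICLE_ADDR)
        time.sleep(period)

if __name__ == "__main__":
    stream_commands()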

Read more

Dr. Been Kim wants to rip open the black box of deep learning.

A senior researcher at Google Brain, Kim specializes in a sort of AI psychology. Like cognitive psychologists before her, she develops various ways to probe the alien minds of artificial neural networks (ANNs), digging into their gory details to better understand the models and their responses to inputs.

The more interpretable ANNs are, the thinking goes, the easier it is to reveal potential flaws in their reasoning. And if we understand when or why our systems choke, we’ll know when not to use them—a foundation for building responsible AI.
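Kim’s own techniques aside, one concrete way to see what such probing looks like is a simple perturbation test: zero out each input feature of a trained network and measure how much the output shifts. The toy Python sketch below uses a stand-in network with random weights and is only meant to illustrate the general idea of an attribution probe, not the specific methods developed at Google Brain.

# Toy perturbation (occlusion) probe: score each input feature by how much
# zeroing it out changes the network's output. Illustrative only.
import numpy as np

rng = np.random.default_rng(0)

# Stand-in "trained" two-layer network with fixed random weights.
W1, b1 = rng.normal(size=(8, 4)), rng.normal(size=8)
W2, b2 = rng.normal(size=(1, 8)), rng.normal(size=1)

def model(x: np.ndarray) -> float:
    h = np.tanh(W1 @ x + b1)        # hidden layer
    return (W2 @ h + b2).item()     # scalar output, e.g. a class logit

def occlusion_attribution(x: np.ndarray, baseline: float = 0.0) -> np.ndarray:
    """Score each feature by the output change when it is set to a baseline value."""
    base_out = model(x)
    scores = np.empty_like(x)
    for i in range(x.size):
        perturbed = x.copy()
        perturbed[i] = baseline
        scores[i] = base_out - model(perturbed)
    return scores

x = rng.normal(size=4)
print(occlusion_attribution(x))  # larger magnitude = feature the output depends on more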

Read more

Yoshua Bengio, Geoffrey Hinton, and Yann LeCun — sometimes called the ‘godfathers of AI’ — have been recognized with the annual $1 million Turing Award for their work developing the AI subfield of deep learning. The techniques the trio developed in the 1990s and 2000s enabled huge breakthroughs in tasks like computer vision and speech recognition. Their work underpins the current proliferation of AI technologies, from self-driving cars to automated medical diagnoses.

Read more

The neural network was a way for machines to see the world around them, recognize sounds and even understand natural language. But scientists had spent more than 50 years working on the concept, and machines couldn’t really do any of that.

Backed by the Canadian government, Dr. Hinton, a computer science professor at the University of Toronto, organized a new research community with several academics who also tackled the concept. They included Yann LeCun, a professor at New York University, and Yoshua Bengio at the University of Montreal.

On Wednesday, the Association for Computing Machinery, the world’s largest society of computing professionals, announced that Drs. Hinton, LeCun and Bengio had won this year’s Turing Award for their work on neural networks. The Turing Award, which was introduced in 1966, is often called the Nobel Prize of computing, and it includes a $1 million prize, which the three scientists will share.


It is our common responsibility and interest to disseminate openly and honestly not only our successes but also our failures. Together, we can realize our dreams for numerous robotic applications and devise a realistic plan to develop them.


The problem, as Giulio Sandini put it, occurs when one sells (or buys) intentions as results. Overselling is a dangerous strategy that can be counterproductive, even for the whole robotics community. Both companies and researchers publish videos of robots doing tasks, but sometimes they fail to point out the limitations of the technology or that those results were achieved in lab conditions. This makes it much more difficult to explain to non-roboticist industry executives the difference between creating a one-off demo and creating a real product that works reliably.

Deep learning, for example, is at the forefront of the AI revolution, but it is too often viewed as the magic train carrying us into the world of technological wonders. AI researchers are warning about overexcitement and cautioning that the next AI winter is coming.

The first cracks are already visible, as with the promises made for self-driving cars. Rodney Brooks, founder of Rethink Robotics, regularly writes relevant essays on this topic on his blog. Robot ethics professor Noel Sharkey wrote an article in Forbes titled “Mama Mia It’s Sophia: A Show Robot or Dangerous Platform to Mislead?” Tony Belpaeme, a social robotics researcher at Ghent University, replied with a tweet: “I had [a European Union] project reviewer express disappointment in our slow research progress, as the Sophia bot clearly showed that the technical challenges we were still struggling with were solved.”