For a robot to be able to “learn” sign language, it is necessary to combine several areas of engineering, such as artificial intelligence, neural networks and computer vision, as well as underactuated robotic hands. “One of the main new developments of this research is that we united two major areas of Robotics: complex systems (such as robotic hands) and social interaction and communication,” explains Juan Víctores, one of the researchers from the Robotics Lab in the Department of Systems Engineering and Automation at UC3M.

The first thing the scientists did as part of their research was to specify, in simulation, the position of each phalanx needed to depict particular signs from Spanish Sign Language. They then attempted to reproduce these positions with the robotic hand, trying to make the movements similar to those a human hand could make. “The objective is for them to be similar and, above all, natural. Various types of neural networks were tested to model this adaptation, and this allowed us to choose the one that could perform the gestures in a way that is comprehensible to people who communicate with sign language,” the researchers explain.
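
As a rough illustration of that pipeline, the sketch below maps a sign identifier to per-phalanx joint targets with a small fully connected network. The article does not describe the actual architectures tested, so the layer sizes, sign vocabulary, joint count and training data here are all assumptions for illustration, not the researchers' setup.

```python
# Hypothetical sketch: learning a mapping from a sign ID to robot joint targets.
# N_SIGNS, N_JOINTS, the architecture and the data are illustrative assumptions.
import torch
import torch.nn as nn

N_SIGNS = 30    # assumed number of distinct signs in the vocabulary
N_JOINTS = 15   # assumed 3 phalanges per finger x 5 fingers

class SignToPose(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(N_SIGNS, 64), nn.ReLU(),
            nn.Linear(64, N_JOINTS), nn.Sigmoid(),  # joint angles scaled to [0, 1]
        )

    def forward(self, sign_onehot):
        return self.net(sign_onehot)

# Fit against reference poses captured in simulation (placeholder data here).
model = SignToPose()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
signs = torch.eye(N_SIGNS)                    # one-hot vector per sign
target_poses = torch.rand(N_SIGNS, N_JOINTS)  # stand-in simulated phalanx angles
for _ in range(500):
    loss = nn.functional.mse_loss(model(signs), target_poses)
    opt.zero_grad(); loss.backward(); opt.step()
```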

Finally, the scientists verified that the system worked by interacting with potential end-users. “The users who have been in contact with the robot have reported 80 percent satisfaction, so the response has been very positive,” says another of the researchers from the Robotics Lab, Jennifer J. Gago. The experiments were carried out with TEO (Task Environment Operator), a humanoid robot for home use developed in the Robotics Lab at UC3M.

A vegetable-picking robot that uses machine learning to identify and harvest a commonplace, but challenging, agricultural crop has been developed by engineers.

The ‘Vegebot’, developed by a team at the University of Cambridge, was initially trained to recognise and harvest iceberg lettuce in a lab setting. It has now been successfully tested in a variety of field conditions in cooperation with G’s Growers, a local fruit and vegetable co-operative.

Although the prototype is nowhere near as fast or efficient as a human worker, it demonstrates how the use of robotics in agriculture might be expanded, even for crops like iceberg lettuce, which are particularly challenging to harvest mechanically. The results are published in The Journal of Field Robotics.
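
The paper's exact perception stack isn't reproduced here, but the general pattern for such a harvester is a two-stage pipeline: localise candidate crops in the camera image, then grade each one as harvest-ready or not before committing the cutting arm. Below is a minimal, hypothetical sketch of the grading stage; the tiny CNN, the 0.8 threshold and the dummy image crops are illustrative assumptions, not the Vegebot's implementation.

```python
# Hypothetical "grade before cutting" stage of a harvesting pipeline.
import torch
import torch.nn as nn

class LettuceGrader(nn.Module):
    """Tiny CNN scoring an image crop for harvest readiness (0..1)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.head = nn.Linear(32, 1)

    def forward(self, x):
        return torch.sigmoid(self.head(self.features(x).flatten(1)))

def harvest_decisions(image_crops, grader, threshold=0.8):
    """Return indices of crops scored above the readiness threshold."""
    with torch.no_grad():
        scores = grader(image_crops).squeeze(1)
    return [i for i, s in enumerate(scores) if s > threshold]

# Usage with dummy data: four candidate 64x64 RGB crops from the field camera.
crops = torch.rand(4, 3, 64, 64)
ready = harvest_decisions(crops, LettuceGrader())
```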

By Donna Lu

It’s the smartest piece of glass in the world. Zongfu Yu at the University of Wisconsin–Madison and his colleagues have created a glass artificial intelligence that uses light to recognise and distinguish between images. What’s more, the glass AI doesn’t need to be powered to operate.

The proof-of-principle glass AI that Yu’s team created can distinguish between handwritten single digit numbers. It can also recognise, in real time, when a number it is presented with changes – for instance, when a 3 is altered to an 8.
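
The glass performs its computation passively as light propagates through it, so there is no direct software equivalent, but the task itself is the classic handwritten-digit problem. For comparison, here is a minimal electronic version of that task using scikit-learn's built-in 8x8 digit images; it illustrates only what the glass is asked to do, not how it does it.

```python
# Conventional digit classifier for comparison with the optical glass AI.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from sklearn.neural_network import MLPClassifier

X, y = load_digits(return_X_y=True)          # 8x8 grayscale digit images
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = MLPClassifier(hidden_layer_sizes=(32,), max_iter=1000, random_state=0)
clf.fit(X_train, y_train)
print(f"test accuracy: {clf.score(X_test, y_test):.2f}")
```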

Analysis: humans make about 35,000 decisions every day, so is it possible for AI to cope with a similar volume of decisions under high uncertainty?

Artificial intelligence (AI) that can think for itself may still seem like something from a science-fiction film. In the recent TV series Westworld, Robert Ford, played by Anthony Hopkins, gave a thought-provoking speech: “we can’t define consciousness because consciousness does not exist. Humans fancy that there’s something special about the way we perceive the world and yet, we live in loops as tight and as closed as the [robots] do, seldom questioning our choices – content, for the most part, to be told what to do next.”

Mimicking realistic human-like cognition in AI has recently become more plausible. This is especially the case in computational neuroscience, a rapidly expanding research area that builds computational models of the brain in order to provide quantitative theories of how it works.

First published in 2016, predictors of chronological and biological age developed using deep learning (DL) are rapidly gaining popularity in the aging research community.

These deep aging clocks can be used in a broad range of applications in the pharmaceutical industry, spanning target identification, drug discovery, data economics, and synthetic patient data generation. We provide here a brief overview of recent advances in this important subset, or perhaps superset, of aging clocks that have been developed using artificial intelligence (AI).
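
At their core, such clocks are regression models: they map a panel of biological measurements (blood biomarkers, methylation sites, and so on) to a predicted age, and the gap between predicted and actual age is read as a biological-aging signal. The sketch below trains a small regressor on synthetic data; the feature count, architecture and data are assumptions for illustration, not any published clock.

```python
# Hypothetical minimal "aging clock": biomarkers in, predicted age (years) out.
import numpy as np
import torch
import torch.nn as nn

N_BIOMARKERS = 40   # assumed blood-panel feature count
rng = np.random.default_rng(0)
X = torch.tensor(rng.normal(size=(1000, N_BIOMARKERS)), dtype=torch.float32)
age = torch.tensor(rng.uniform(20, 90, size=(1000, 1)), dtype=torch.float32)

clock = nn.Sequential(
    nn.Linear(N_BIOMARKERS, 128), nn.ReLU(),
    nn.Linear(128, 64), nn.ReLU(),
    nn.Linear(64, 1),   # predicted age in years
)
opt = torch.optim.Adam(clock.parameters(), lr=1e-3)
for _ in range(200):
    loss = nn.functional.l1_loss(clock(X), age)  # MAE is the usual clock metric
    opt.zero_grad(); loss.backward(); opt.step()
```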

DARPA’s Ground X-Vehicle Technologies (GXV-T) program aims to improve mobility, survivability, safety, and effectiveness of future combat vehicles without piling on armor. The demonstrations featured here show progress on technologies for traveling quickly over varied terrain and improving situational awareness and ease of operation.

These demonstrations feature technologies developed for DARPA by:

1) Carnegie Mellon University, National Robotics Engineering Center
2) Honeywell International
3) Pratt & Miller
4) QinetiQ
5) Raytheon BBN Technologies