
By David Hambling

A robot pilot is learning to fly. It has passed its pilot's test and flown its first plane, but it has also had its first mishap.

Unlike a traditional autopilot, the ROBOpilot Unmanned Aircraft Conversion System literally takes the controls, pressing on foot pedals and handling the yoke using robotic arms. It reads the dials and meters with a computer vision system.
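To make the idea concrete, here is a minimal, purely illustrative sketch of that kind of closed loop: a vision system reads an instrument, a controller computes a correction, and a robotic actuator moves the physical control. Every name below is hypothetical and heavily simplified; this is not the ROBOpilot software, just the general pattern it embodies.

```python
# Illustrative "read the gauges, move the controls" loop. All functions are
# hypothetical stand-ins, not real ROBOpilot code.
import time


def read_altitude_from_camera() -> float:
    """Stand-in for a computer-vision reading of the altimeter (hypothetical)."""
    return 4_950.0  # pretend the vision system read 4,950 ft


def set_yoke_pitch(command: float) -> None:
    """Stand-in for commanding the robotic arm holding the yoke (hypothetical)."""
    print(f"yoke pitch command: {command:+.3f}")


def altitude_hold(target_ft: float, kp: float = 0.001, cycles: int = 5) -> None:
    """Proportional-only altitude hold: read the dial, nudge the yoke."""
    for _ in range(cycles):
        error = target_ft - read_altitude_from_camera()
        set_yoke_pitch(kp * error)   # positive error -> pitch up slightly
        time.sleep(0.1)              # control-loop period


if __name__ == "__main__":
    altitude_hold(target_ft=5_000.0)
```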

Black holes are some of the most powerful and fascinating phenomena in our Universe, but due to their tendency to swallow up anything nearby, getting up close to them for some detailed analysis isn’t possible right now.

Instead, scientists have put forward a proposal for how we might be able to model these massive, complex objects in the lab — using holograms.

While experiments haven’t yet been carried out, the researchers have put forward a theoretical framework for a black hole hologram that would allow us to test some of the more mysterious and elusive properties of black holes — specifically what happens to the laws of physics beyond a black hole’s event horizon.

This week, a collaborative effort among computer scientists and academics to safeguard data is winning attention, and it has quantum computing written all over it.

The Netherlands’ Centrum Wiskunde & Informatica (CWI), the national research institute for mathematics and computer science, had the story: IBM Research has developed “quantum-safe algorithms” for securing data, working with international partners including CWI and Radboud University in the Netherlands.

IBM and partners share concerns that data protected by current encryption methods may become insecure within the next 10 to 30 years.
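The article does not spell out which algorithms are involved, but quantum-safe schemes of the kind IBM and its partners work on are typically lattice-based, resting on the learning-with-errors (LWE) problem. The toy sketch below encrypts a single bit with LWE to show the underlying idea; the parameters are deliberately tiny, it is not secure, and it is not IBM's actual algorithm.

```python
# Toy single-bit LWE encryption, illustrating the hardness assumption behind
# many lattice-based, quantum-safe schemes. NOT secure; illustration only.
import numpy as np

rng = np.random.default_rng(0)

n, m, q = 16, 32, 3329          # dimension, number of samples, modulus

# Key generation: secret s, public key (A, b = A @ s + e mod q) with small noise e.
s = rng.integers(0, q, size=n)
A = rng.integers(0, q, size=(m, n))
e = rng.integers(-1, 2, size=m)           # noise in {-1, 0, 1}
b = (A @ s + e) % q


def encrypt(bit: int):
    """Encrypt one bit by combining a random subset of the public LWE samples."""
    r = rng.integers(0, 2, size=m)        # random 0/1 selection vector
    u = (r @ A) % q
    v = (r @ b + bit * (q // 2)) % q
    return u, int(v)


def decrypt(u, v) -> int:
    """Decrypt: the accumulated noise is small, so the result sits near 0 or q/2."""
    d = (v - u @ s) % q
    return int(min(d, q - d) > q // 4)    # closer to q/2 -> the bit was 1


for bit in (0, 1):
    assert decrypt(*encrypt(bit)) == bit
print("toy LWE round-trip OK")
```

Roughly speaking, recovering the secret from (A, b) requires solving a noisy system of modular equations, a problem believed to remain hard even for quantum computers — which is why such constructions are proposed as replacements for encryption that Shor's algorithm would break.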

A team of researchers at Yonsei University and École Polytechnique Fédérale de Lausanne (EPFL) has recently developed a new technique that can recognize emotions by analyzing people’s faces in images along with contextual features. They presented and outlined their deep learning-based architecture, called CAER-Net, in a paper pre-published on arXiv.

For several years, researchers worldwide have been trying to develop tools for automatically detecting human emotions by analyzing images, videos or audio clips. These tools could have numerous applications, for instance, improving robot-human interactions or helping doctors to identify signs of mental or neurological disorders (e.g., based on atypical speech patterns, facial features, etc.).

So far, the majority of techniques for recognizing emotions in images have been based on the analysis of people’s facial expressions, essentially assuming that these expressions best convey humans’ emotional responses. As a result, most datasets for training and evaluating emotion recognition tools (e.g., the AFEW and FER2013 datasets) only contain cropped images of human faces.
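CAER-Net's contribution is to look beyond the cropped face and also encode the surrounding scene. The PyTorch sketch below shows that two-stream idea in miniature: one branch encodes the face, another encodes the context, and the two are fused before classification. The layer sizes and the simple concatenation fusion are illustrative choices, not the paper's exact architecture.

```python
# Minimal two-stream face + context emotion classifier (illustrative only).
import torch
import torch.nn as nn


def small_cnn(out_dim: int = 128) -> nn.Sequential:
    """A tiny convolutional encoder shared in shape by both streams."""
    return nn.Sequential(
        nn.Conv2d(3, 32, kernel_size=3, padding=1), nn.ReLU(),
        nn.MaxPool2d(2),
        nn.Conv2d(32, 64, kernel_size=3, padding=1), nn.ReLU(),
        nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        nn.Linear(64, out_dim), nn.ReLU(),
    )


class TwoStreamEmotionNet(nn.Module):
    def __init__(self, num_emotions: int = 7):
        super().__init__()
        self.face_stream = small_cnn()      # cropped face
        self.context_stream = small_cnn()   # full scene around the person
        self.classifier = nn.Linear(128 * 2, num_emotions)

    def forward(self, face: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([self.face_stream(face), self.context_stream(context)], dim=1)
        return self.classifier(fused)


# Example forward pass with dummy batches of face crops and context images.
model = TwoStreamEmotionNet()
logits = model(torch.randn(4, 3, 96, 96), torch.randn(4, 3, 224, 224))
print(logits.shape)  # torch.Size([4, 7])
```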

A team of researchers at the University of Virginia has recently carried out a large-scale analysis aimed at identifying features in film trailers that best predict a movie’s genre and estimated budget. In their study, outlined in a paper pre-published on arXiv, the researchers specifically compared the effectiveness of visual, audio, text, and metadata-based features.

“Video understanding is the next frontier after image understanding,” Vicente Ordonez, one of the researchers who carried out the study, told TechXplore. “However, much work on video understanding has so far focused on short clips with a human performing a single action. We wanted something longer, but there is also the issue of computational power. Video trailers seemed like an intermediate compromise, as they display a multitude of things, from scary to funny.”

Movie trailers are short and can easily be paired with movie descriptions. Ordonez and his colleagues realized that these characteristics make them ideal to investigate parallels between video and language.
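The core of the comparison the researchers describe can be sketched as follows: train the same classifier on each modality's feature vectors (visual, audio, text, metadata) separately and on their concatenation, then compare genre-prediction accuracy. In the sketch below the feature extraction step is replaced with random placeholders; in the actual study those vectors would come from video, audio and text models.

```python
# Illustrative comparison of per-modality vs. combined features for genre
# prediction. Features are random placeholders, not real trailer features.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trailers = 200

# Placeholder feature matrices, one per modality (dimensions are arbitrary).
features = {
    "visual":   rng.normal(size=(n_trailers, 512)),
    "audio":    rng.normal(size=(n_trailers, 128)),
    "text":     rng.normal(size=(n_trailers, 300)),
    "metadata": rng.normal(size=(n_trailers, 16)),
}
genres = rng.integers(0, 5, size=n_trailers)   # e.g. 5 coarse genre labels


def score(X: np.ndarray) -> float:
    """Mean 5-fold cross-validated accuracy of a simple linear classifier."""
    clf = LogisticRegression(max_iter=1000)
    return cross_val_score(clf, X, genres, cv=5).mean()


for name, X in features.items():
    print(f"{name:>8}: {score(X):.2f}")
print(f"combined: {score(np.hstack(list(features.values()))):.2f}")
```

With real features, the relative scores would indicate which modality carries the most genre signal, which is the kind of comparison the paper reports.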