
Model moves computers closer to understanding human conversation

An engineer from the Johns Hopkins Center for Language and Speech Processing has developed a machine learning model that can distinguish the functions of speech in dialog transcripts output by language understanding, or LU, systems. The approach could eventually help computers “understand” spoken or written text in much the same way that humans do.

Developed by CLSP Assistant Research Scientist Piotr Zelasko, the new model identifies the intent behind words and organizes them in the final transcript into categories such as “Statement,” “Question,” or “Interruption,” a task called “dialog act recognition.” By providing other models with a more organized and segmented version of the text to work with, Zelasko’s model could become a first step in making sense of a conversation, he said.

“This new method means that LU systems no longer have to deal with huge, unstructured chunks of text, which they struggle with when trying to classify things such as the topic, sentiment, or intent of the text. Instead, they can work with a series of expressions, which are saying very specific things, like a question or interruption. My model enables these systems to work where they might have otherwise failed,” said Zelasko, whose study appeared recently in Transactions of the Association for Computational Linguistics.
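
To make the task concrete, here is a minimal sketch of dialog act recognition framed as utterance classification. This is not Zelasko’s model; the labels, the tiny training set, and the TF-IDF-plus-logistic-regression classifier below are invented stand-ins that only illustrate the task of mapping each utterance to a label such as “Statement,” “Question,” or “Interruption.”

```python
# A minimal sketch of dialog act recognition as utterance classification.
# NOT the model from the paper: the training pairs and the simple
# TF-IDF + logistic regression classifier are illustrative stand-ins.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical (utterance, dialog act) training pairs.
train = [
    ("the meeting is at noon", "Statement"),
    ("we shipped the fix yesterday", "Statement"),
    ("when does the meeting start", "Question"),
    ("did you ship the fix", "Question"),
    ("wait, hold on", "Interruption"),
    ("sorry to cut in", "Interruption"),
]
texts, labels = zip(*train)

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
clf.fit(texts, labels)

# Tag each utterance in a new transcript, so downstream LU systems get
# a series of short, labeled expressions instead of one unstructured block.
transcript = ["so the demo is tomorrow", "wait, is it", "yes, at ten"]
for utt, act in zip(transcript, clf.predict(transcript)):
    print(f"[{act}] {utt}")
```

The point is the shape of the output: downstream systems receive a sequence of short, labeled expressions rather than one huge chunk of text.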

A deep learning framework to estimate the pose of robotic arms and predict their movements

As robots are gradually introduced into various real-world environments, developers and roboticists will need to ensure that these robots can safely operate around humans. In recent years, researchers have introduced various approaches for estimating the positions and predicting the movements of robots in real time.

Researchers at the Universidade Federal de Pernambuco in Brazil have recently created a new deep learning model to estimate the pose of robotic arms and predict their movements. This model, introduced in a paper pre-published on arXiv, is specifically designed to enhance the safety of robots while they are collaborating or interacting with humans.

“Motivated by the need to anticipate accidents during human-robot interaction (HRI), we explore a framework that improves the safety of people working in close proximity to robots,” Djamel H. Sadok, one of the researchers who carried out the study, told TechXplore. “Pose detection is seen as an important component of the overall solution. To this end, we propose a new architecture for Pose Detection based on Self-Calibrated Convolutions (SCConv) and Extreme Learning Machine (ELM).”
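
The ELM half of that pairing is simple enough to sketch. The code below is a generic Extreme Learning Machine in NumPy, not the architecture from the paper: in an ELM, the hidden-layer weights are drawn at random and frozen, and only the output weights are learned, in closed form, by a least-squares fit. The feature sizes and the toy regression target are invented.

```python
# Generic Extreme Learning Machine (ELM) sketch in NumPy.
# Hidden weights are random and fixed; only the output weights
# are solved, in closed form, via least squares.
import numpy as np

rng = np.random.default_rng(0)

def elm_fit(X, Y, n_hidden=256):
    """Learn output weights beta over a fixed random hidden layer."""
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    beta, *_ = np.linalg.lstsq(H, Y, rcond=None)  # closed-form solve
    return W, b, beta

def elm_predict(X, W, b, beta):
    return np.tanh(X @ W + b) @ beta

# Toy usage: regress 2-D "joint coordinates" from 8-D "image features"
# (invented dimensions, purely for illustration).
X_train = rng.normal(size=(500, 8))
Y_train = np.stack([X_train[:, 0] + X_train[:, 1],
                    X_train[:, 2] - X_train[:, 3]], axis=1)
W, b, beta = elm_fit(X_train, Y_train)
print(elm_predict(X_train[:5], W, b, beta))       # predicted coordinates
```

Because the fit is a single linear solve rather than gradient descent, ELMs train very quickly, which is part of their appeal in real-time settings like this one.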

Sniffing out your identity with breath biometrics

Biometric authentication methods like fingerprint and iris scans are a staple of any spy movie, and trying to circumvent those security measures is often a core plot point. But these days the technology is not limited to spies, as fingerprint verification and facial recognition are now common features on many of our phones.

Now, researchers have developed a new potential odorous option for the security toolkit: your breath. In a report published in Chemical Communications, researchers from Kyushu University’s Institute for Materials Chemistry and Engineering, in collaboration with the University of Tokyo, have developed an olfactory sensor capable of identifying individuals by analyzing the compounds in their breath.

Combined with machine learning, this “artificial nose,” built with a 16-channel sensor array, was able to authenticate up to 20 individuals with an average accuracy of more than 97%.
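
As a rough illustration of that authentication step, the sketch below trains a classifier to map a 16-dimensional sensor reading to one of 20 enrolled identities. The data generator and the random-forest model are assumptions made for illustration; the paper’s actual features and learning pipeline may differ.

```python
# Synthetic stand-in for breath-based identification: map a 16-channel
# sensor reading to one of 20 enrolled people. Data and model choice
# are invented; only the task shape matches the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_people, n_channels, samples_each = 20, 16, 30

# Each person gets a characteristic 16-D "breath signature",
# and each breath sample is that signature plus noise.
signatures = rng.normal(size=(n_people, n_channels))
X = (np.repeat(signatures, samples_each, axis=0)
     + 0.3 * rng.normal(size=(n_people * samples_each, n_channels)))
y = np.repeat(np.arange(n_people), samples_each)

# Cross-validated identification accuracy over the 20 classes.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
print("accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```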

Technology helps self-driving cars learn from their own memories

An autonomous vehicle is able to navigate city streets and other less-busy environments by recognizing pedestrians, other vehicles and potential obstacles through artificial intelligence. This is achieved with the help of artificial neural networks, which are trained to “see” the car’s surroundings, mimicking the human visual perception system.

But unlike humans, cars using artificial neural networks have no memory of the past and are in a constant state of seeing the world for the first time—no matter how many times they’ve driven down a particular road before. This is particularly problematic in adverse weather conditions, when the car cannot safely rely on its sensors.

Researchers at the Cornell Ann S. Bowers College of Computing and Information Science and the College of Engineering have produced three concurrent research papers with the goal of overcoming this limitation by providing the car with the ability to create “memories” of previous experiences and use them in future navigation.
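
As a toy illustration of the general idea (not of the Cornell methods themselves), the sketch below caches perception features keyed by location and retrieves them when the car revisits that place, so that degraded sensing, say in fog, can be blended with what was seen there on a clear day. All names, numbers, and the blending rule are invented.

```python
# Toy "drive memory": cache perception features by location and
# recall them on revisits. Purely illustrative, not the papers' methods.
import numpy as np

class DriveMemory:
    def __init__(self, radius=5.0):
        self.radius = radius      # metres within which a memory applies
        self.locations = []       # (x, y) positions where features were stored
        self.features = []        # perception features observed there

    def store(self, xy, feat):
        self.locations.append(np.asarray(xy, dtype=float))
        self.features.append(np.asarray(feat, dtype=float))

    def recall(self, xy):
        """Mean of features recorded near this location, or None."""
        if not self.locations:
            return None
        dists = np.linalg.norm(np.stack(self.locations) - np.asarray(xy), axis=1)
        near = dists < self.radius
        return np.stack(self.features)[near].mean(axis=0) if near.any() else None

# On a clear day, remember what the sensors saw at a location...
mem = DriveMemory()
mem.store((120.0, 45.0), [0.9, 0.1, 0.8])

# ...and in fog at the same spot, blend the memory with the weak signal.
current = np.array([0.4, 0.1, 0.3])
past = mem.recall((121.0, 44.0))
fused = current if past is None else 0.5 * (current + past)
print(fused)
```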

What If Human Brains Had AI Implants? | Unveiled

Artificial intelligence: it’s everywhere! Our homes, our cars, our schools and work. So where, if anywhere, does it stop? And how close to ourselves can our devices reasonably get? For this video, Unveiled uncovers plans to use human brain implants to improve the performance of our brains! What do you think? Are neural implants a good thing or a bad thing?

This is Unveiled, giving you incredible answers to extraordinary questions!

Find more amazing videos for your curiosity here:
What If Humanity Was A Type III Civilisation? — https://www.youtube.com/watch?v=jcx_nKWZ4Uw
Why the Microverse Might Be a Reality — https://www.youtube.com/watch?v=BF6n-bjYr7Y

Are you constantly curious? Then subscribe for more from Unveiled ► https://goo.gl/GmtyPv

#AI #ArtificialIntelligence #Advanced #Future #Futuristic #Technology #Civilization

Amazon Unveils First Fully Autonomous Mobile Robot Called Proteus

The autonomous robot will initially be deployed in the outbound GoCart handling areas of Amazon’s fulfillment and sort centers. “Our vision is to automate GoCart handling throughout the network, which will help reduce the need for people to manually move heavy objects through our facility and instead let them focus on more rewarding work,” the company said.

Celebrating 10 years of robotic evolution, Amazon unveiled several more automated systems. Cardinal is an AI-based robot arm that can select one package out of a pile of boxes, lift it, read the label, and place it on the appropriate GoCart (for Proteus to collect). Amazon is currently trialing a Cardinal prototype for handling packages up to 50 pounds and expects to deploy the technology in fulfillment centers next year.

Nvidia’s New AI Model can Convert still Images to 3D Graphics

Nvidia has made another attempt to add depth to shallow graphics. After converting 2D images into 3D scenes, models, and videos, the company has turned its focus to editing. The GPU giant today unveiled a new AI method that transforms still photos into 3D objects that creators can modify with ease. Nvidia researchers have developed a new inverse rendering pipeline, Nvidia 3D MoMa, which allows users to reconstruct a series of still photos into a 3D computer model of an object, or even a scene. The key benefit of this workflow, compared with more traditional photogrammetry methods, is its ability to output clean 3D models that 3D gaming and visual engines can import and edit out of the box.

According to reports, while other photogrammetry programs simply turn 2D images into 3D models, Nvidia’s 3D MoMa technology takes things a step further by producing mesh, material, and lighting information for the subjects and outputting it in a format that’s compatible with existing 3D graphics engines and modeling tools. And it’s all done in a relatively short timeframe, with Nvidia saying 3D MoMa can generate triangle mesh models within an hour using a single Nvidia Tensor Core GPU.
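
To show what an engine-ready triangle mesh actually looks like on disk, here is a minimal writer for the Wavefront OBJ format, one widely supported interchange format. This is not Nvidia’s code, and OBJ alone does not capture the material and lighting data 3D MoMa also produces; it only makes “a format compatible with existing 3D graphics engines” concrete.

```python
# Minimal Wavefront OBJ writer: one common triangle-mesh interchange
# format that modeling tools and game engines can import directly.
# Illustrative only; not Nvidia's pipeline or output code.

def write_obj(path, vertices, triangles):
    """vertices: list of (x, y, z); triangles: triples of vertex indices."""
    with open(path, "w") as f:
        for x, y, z in vertices:
            f.write(f"v {x} {y} {z}\n")
        for a, b, c in triangles:
            f.write(f"f {a + 1} {b + 1} {c + 1}\n")  # OBJ is 1-indexed

# Smoke test: a single triangle any modeling tool can open.
write_obj("triangle.obj",
          vertices=[(0, 0, 0), (1, 0, 0), (0, 1, 0)],
          triangles=[(0, 1, 2)])
```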

David Luebke, Nvidia’s VP of graphics research, described the technique to India Today as “a holy grail unifying computer vision and computer graphics.”

Top Military Drones | Military Technologies and Weapons | Drones 2022

https://youtube.com/watch?v=YEMzfMfLtmY&feature=share

👉 For business inquiries: [email protected]
✅ Instagram: https://www.instagram.com/pro_robots

You’re on PRO Robots, and in this episode we’re going to talk about the best military drones of the 21st century. Today, more than 100 countries are developing military drones, constantly innovating to make them faster, more powerful, and smarter. Drones are used for reconnaissance, surveillance, target detection, munitions delivery, and enemy strikes. The vehicles can fly autonomously or be flown by a remote operator, and can return to base or act as kamikaze munitions. See an overview of the best military drones and the trends in combat drone development in one video!

0:00 Intro
0:39 Drone Development
1:20 Northrop Grumman
2:02 Aksungur drone
2:56 Drone MQ-20 Avenger
3:44 The Hermes 900 Drone
4:40 Heron TP
5:27 Chinese Drone HongDu GJ-11 Sharp Sword
6:06 CASC Rainbow
6:58 Drone MQ-9 Reaper
7:58 CAIG Wing Loong II
8:46 Drone X-47B
9:31 Russian drone S-70 «Hunter»
10:41 Results

#prorobots #robots #robot #futuretechnologies #robotics

More interesting and useful content:
✅ Elon Musk Innovation https://www.youtube.com/playlist?list=PLcyYMmVvkTuQ-8LO6CwGWbSCpWI2jJqCQ
✅ Future Technologies Reviews https://www.youtube.com/playlist?list=PLcyYMmVvkTuTgL98RdT8-z-9a2CGeoBQF
✅ Technology news https://www.facebook.com/PRO.Robots.Info

#prorobots #technology #roboticsnews

High Energy Lasers

Raytheon Intelligence & Space’s high-energy laser systems use photons, or particles of light, to carry out military missions and civil defense. This directed energy technology measures distance, designates targets, and enables detection of threats, tracking during maneuvers, and positive visual identification to defeat a wide range of threats, including unmanned aerial systems, rockets, artillery, and mortars.

