
An international team of scientists, led by the University of Leeds, has assessed how robotics and autonomous systems might facilitate or impede the delivery of the UN Sustainable Development Goals (SDGs).

Their findings identify key opportunities and key threats that need to be considered while developing, deploying and governing robotics and autonomous systems.

The key opportunities robotics and autonomous systems present are through autonomous task completion, supporting human activities, fostering innovation, and improving monitoring. Emerging threats relate to reinforcing inequalities, exacerbating environmental change, diverting resources from tried-and-tested solutions, and reducing freedom and privacy through inadequate governance.

Artificial intelligence (AI) and machine learning techniques have proved to be very promising for completing numerous tasks, including those that involve processing and generating language. Language-related machine learning models have enabled the creation of systems that can interact and converse with humans, including chatbots, smart assistants, and smart speakers.

To tackle dialog-oriented tasks, language models should be able to learn high-quality dialog representations. These are representations that summarize the ideas expressed by two parties conversing about specific topics, as well as how these dialogs are structured.

Researchers at Northwestern University and AWS AI Labs have recently developed a self-supervised learning model that can learn effective dialog representations for different types of dialogs. This model, introduced in a paper pre-published on arXiv, could be used to develop more versatile and better performing dialog systems using a limited amount of training data.
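The paper's exact training objective is not described here, but the general idea of self-supervised dialog representation learning can be illustrated with a generic contrastive sketch: two corrupted "views" of the same dialog are encoded and pulled together, while the other dialogs in the batch serve as negatives. The toy encoder, the turn-dropout style augmentation, and all hyperparameters below are assumptions for illustration, not the authors' model.

```python
# Illustrative sketch only: a generic contrastive (SimCSE-style) objective for
# dialog representations. This is NOT the exact method from the Northwestern/AWS
# paper; the toy encoder and the "two views per dialog" setup are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DialogEncoder(nn.Module):
    """Toy encoder: embeds token ids and mean-pools over the whole dialog."""
    def __init__(self, vocab_size=1000, dim=128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, dim)
        self.proj = nn.Linear(dim, dim)

    def forward(self, token_ids):            # token_ids: (batch, seq_len)
        h = self.emb(token_ids).mean(dim=1)  # mean-pool token embeddings
        return F.normalize(self.proj(h), dim=-1)

def contrastive_loss(z1, z2, temperature=0.05):
    """NT-Xent style: matching views are positives, the rest of the batch negatives."""
    logits = z1 @ z2.t() / temperature
    labels = torch.arange(z1.size(0))
    return F.cross_entropy(logits, labels)

# Two "views" of the same batch of dialogs (e.g., produced by randomly dropping turns).
encoder = DialogEncoder()
view_a = torch.randint(0, 1000, (8, 40))   # 8 dialogs, 40 tokens each
view_b = torch.randint(0, 1000, (8, 40))
loss = contrastive_loss(encoder(view_a), encoder(view_b))
loss.backward()
print(float(loss))
```

Training on many unlabeled conversations this way yields an encoder whose outputs can then be reused, with little labeled data, for downstream dialog tasks.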

An engineer from the Johns Hopkins Center for Language and Speech Processing (CLSP) has developed a machine learning model that can distinguish the functions of speech in transcripts of dialogs output by language understanding, or LU, systems, an approach that could eventually help computers “understand” spoken or written text in much the same way that humans do.

Developed by CLSP Assistant Research Scientist Piotr Zelasko, the new model identifies the intent behind words and organizes them into categories such as “Statement,” “Question,” or “Interruption” in the final transcript, a task called “dialog act recognition.” By providing other models with a more organized and segmented version of text to work with, Zelasko’s model could become a first step in making sense of a conversation, he said.

“This new method means that LU systems no longer have to deal with huge, unstructured chunks of text, which they struggle with when trying to classify things such as the topic, sentiment, or intent of the text. Instead, they can work with a series of expressions, which are saying very specific things, like a question or interruption. My model enables these systems to work where they might have otherwise failed,” said Zelasko, whose study appeared recently in Transactions of the Association for Computational Linguistics.
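As a rough illustration of dialog act recognition framed as per-utterance classification, the sketch below trains a simple bag-of-words classifier on a handful of invented utterances labeled “Statement,” “Question,” or “Interruption.” It is not Zelasko's model, which operates over whole conversation transcripts; the tiny dataset and the classifier choice are assumptions.

```python
# Minimal illustration of dialog act recognition as utterance classification.
# The toy bag-of-words classifier and training data are assumptions for
# demonstration; the model described in the article works on full transcripts.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Tiny invented training set: (utterance, dialog act) pairs.
utterances = [
    "What time does the meeting start?",
    "The report is due on Friday.",
    "Sorry to cut in, but we are out of time.",
    "Could you repeat that please?",
    "I finished the draft yesterday.",
    "Hold on, let me stop you there.",
]
acts = ["Question", "Statement", "Interruption",
        "Question", "Statement", "Interruption"]

clf = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)),
                    LogisticRegression(max_iter=1000))
clf.fit(utterances, acts)

# Downstream LU systems would then consume these labeled segments instead of raw text.
print(clf.predict(["When is lunch?", "I sent the email this morning."]))
```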

As robots are gradually introduced into various real-world environments, developers and roboticists will need to ensure that they can safely operate around humans. In recent years, researchers have introduced various approaches for estimating the positions and predicting the movements of robots in real time.

Researchers at the Universidade Federal de Pernambuco in Brazil have recently created a new deep learning model to estimate the pose of robotic arms and predict their movements. This model, introduced in a paper pre-published on arXiv, is specifically designed to enhance the safety of robots while they are collaborating or interacting with humans.

“Motivated by the need to anticipate accidents during human-robot interaction (HRI), we explore a framework that improves the safety of people working in close proximity to robots,” Djamel H. Sadok, one of the researchers who carried out the study, told TechXplore. “Pose detection is seen as an important component of the overall solution. To this end, we propose a new architecture for Pose Detection based on Self-Calibrated Convolutions (SCConv) and Extreme Learning Machine (ELM).”
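The second ingredient named in the quote, the Extreme Learning Machine, is a readout whose hidden weights are random and fixed while only the output weights are solved in closed form. The sketch below shows that idea on stand-in feature vectors; the SCConv backbone is omitted and all dimensions are assumptions, so this is not the paper's architecture.

```python
# Sketch of the Extreme Learning Machine (ELM) readout idea: a random hidden layer
# whose output weights are solved in closed form by least squares. The SCConv
# backbone is omitted; the "features" below are random stand-ins, and the exact
# architecture in the paper may differ.
import numpy as np

rng = np.random.default_rng(0)

n_samples, n_features, n_hidden, n_joints = 200, 512, 256, 6
X = rng.normal(size=(n_samples, n_features))      # backbone features per frame
Y = rng.normal(size=(n_samples, n_joints * 2))    # 2D coordinates of 6 arm joints

# Random, fixed input weights and biases (never trained in an ELM).
W_in = rng.normal(size=(n_features, n_hidden))
b = rng.normal(size=n_hidden)
H = np.tanh(X @ W_in + b)                         # hidden activations

# Output weights via regularized least squares (ridge solution).
lam = 1e-2
W_out = np.linalg.solve(H.T @ H + lam * np.eye(n_hidden), H.T @ Y)

pred = np.tanh(X @ W_in + b) @ W_out
print("train MSE:", float(np.mean((pred - Y) ** 2)))
```

Because the output weights have a closed-form solution, an ELM readout can be fitted very quickly, which is one reason it is attractive for real-time safety applications.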

Biometric authentication methods like fingerprint and iris scans are a staple of any spy movie, and trying to circumvent those security measures is often a core plot point. But these days the technology is not limited to spies, as fingerprint verification and facial recognition are now common features on many of our phones.

Now, researchers have developed a new potential odorous option for the security toolkit: your breath. In a report published in Chemical Communications, researchers from Kyushu University’s Institute for Materials Chemistry and Engineering, in collaboration with the University of Tokyo, have developed an olfactory sensor capable of identifying individuals by analyzing the compounds in their breath.

Combined with machine learning, this “artificial nose,” built with a 16-channel sensor array, was able to authenticate up to 20 individuals with an average accuracy of more than 97%.
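Conceptually, the authentication step is a supervised classification problem: each breath sample becomes a 16-dimensional vector of sensor responses, and the model predicts which of the enrolled individuals produced it. The sketch below illustrates that framing on synthetic data; the study's actual sensors, features, and model choice are not reproduced here.

```python
# Illustrative only: authenticating individuals from a 16-channel chemical sensor
# array framed as a classification problem. The data here is synthetic; the
# study's real sensor chemistry, features, and classifier are not reproduced.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(1)
n_people, samples_per_person, n_channels = 20, 30, 16

# Each person gets a characteristic "breath profile" plus measurement noise.
profiles = rng.normal(size=(n_people, n_channels))
X = np.vstack([profiles[p] + 0.3 * rng.normal(size=(samples_per_person, n_channels))
               for p in range(n_people)])
y = np.repeat(np.arange(n_people), samples_per_person)

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print("mean identification accuracy:", scores.mean())
```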

An autonomous vehicle is able to navigate city streets and other less-busy environments by recognizing pedestrians, other vehicles and potential obstacles through artificial intelligence. This is achieved with the help of artificial neural networks, which are trained to “see” the car’s surroundings, mimicking the human visual perception system.

But unlike humans, cars using these artificial neural networks have no memory of the past and are in a constant state of seeing the world for the first time—no matter how many times they’ve driven down a particular road before. This is particularly problematic in adverse weather conditions, when the car cannot safely rely on its sensors.

Researchers at the Cornell Ann S. Bowers College of Computing and Information Science and the College of Engineering have produced three concurrent research papers with the goal of overcoming this limitation by providing the car with the ability to create “memories” of previous experiences and use them in future navigation.
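The three papers are not detailed here, but one simple way to picture such a driving “memory” is a spatial cache: feature vectors observed on past drives are stored under a coarse location key and retrieved later to supplement a degraded current view. The grid size, fusion rule, and data in the sketch below are assumptions for illustration, not the Cornell method.

```python
# Conceptual sketch only (not the Cornell papers' exact approach): keep a spatial
# "memory" of feature vectors from past drives, keyed by a coarse location grid,
# and retrieve them to supplement the current sensor reading at the same spot.
import numpy as np
from collections import defaultdict

GRID = 5.0  # metres per memory cell; an assumed discretization

def cell(xy):
    return (round(xy[0] / GRID), round(xy[1] / GRID))

memory = defaultdict(list)   # location cell -> list of past feature vectors

def store(xy, features):
    memory[cell(xy)].append(features)

def recall(xy):
    """Average of everything previously observed near this location, if any."""
    past = memory.get(cell(xy), [])
    return np.mean(past, axis=0) if past else None

# First drive past a location: store what was perceived there.
store((102.3, 47.9), np.array([0.8, 0.1, 0.3]))

# Later drive in fog: fuse today's noisy features with the remembered ones.
today = np.array([0.5, 0.4, 0.2])
remembered = recall((102.1, 48.2))
fused = today if remembered is None else 0.5 * today + 0.5 * remembered
print(fused)
```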

Artificial intelligence: it’s everywhere! Our homes, our cars, our schools and work. So where, if ever, does it stop? And how close to ourselves can our devices reasonably get? For this video, Unveiled uncovers plans to use human brain implants to improve the performance of our brains! What do you think? Are neural implants a good thing, or a bad thing?

This is Unveiled, giving you incredible answers to extraordinary questions!

The autonomous robot, Proteus, will initially be deployed in the outbound GoCart handling areas of Amazon’s fulfillment and sort centers. “Our vision is to automate GoCart handling throughout the network, which will help reduce the need for people to manually move heavy objects through our facility and instead let them focus on more rewarding work,” the company said.

Celebrating 10 years of robotic evolution, Amazon unveiled several more automated systems. Cardinal is an AI-based robot arm that can select one package out of a pile of boxes, lift it, read the label, and place it on the appropriate GoCart (for Proteus to collect). Amazon is currently trialing a Cardinal prototype for handling packages up to 50 pounds and expects to deploy the technology in fulfillment centers next year.

Nvidia has made another attempt to add depth to shallow graphics. After converting 2D images into 3D scenes, models, and videos, the company has turned its focus to editing. The GPU giant today unveiled a new AI method that transforms still photos into 3D objects that creators can modify with ease.

Nvidia researchers have developed a new inverse rendering pipeline, Nvidia 3D MoMa, that allows users to reconstruct a series of still photos into a 3D computer model of an object, or even a scene. The key benefit of this workflow, compared to more traditional photogrammetry methods, is its ability to output clean 3D models that can be imported and edited out of the box by 3D gaming and visual engines.

According to reports, while other photogrammetry programs can turn 2D images into 3D models, Nvidia’s 3D MoMa technology takes it a step further by producing mesh, material, and lighting information for the subject and outputting it in a format that’s compatible with existing 3D graphics engines and modeling tools. And it’s all done in a relatively short timeframe, with Nvidia saying 3D MoMa can generate triangle-mesh models within an hour using a single Nvidia Tensor Core GPU.
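At its core, inverse rendering works backwards from photos: scene parameters are adjusted by gradient descent until a differentiable renderer reproduces the captured images. The toy sketch below shows that optimization loop with a stand-in "renderer"; it is not Nvidia's pipeline, which rasterizes triangle meshes with materials and lighting.

```python
# Toy sketch of the inverse-rendering idea behind this kind of pipeline: scene
# parameters are optimized so a differentiable renderer reproduces the captured
# photos. The "renderer" here is a stand-in placeholder, not Nvidia's pipeline.
import torch

def render(params, view):
    # Placeholder differentiable renderer: a view-dependent mixing of the scene
    # parameters. A real pipeline would rasterize a triangle mesh with materials
    # and lighting instead.
    return torch.tanh(view @ params)

torch.manual_seed(0)
true_params = torch.randn(8, 3)                 # "ground-truth" scene (unknown)
views = [torch.randn(16, 8) for _ in range(5)]  # camera/view matrices for 5 photos
photos = [render(true_params, v) for v in views]

params = torch.zeros(8, 3, requires_grad=True)  # scene estimate to recover
opt = torch.optim.Adam([params], lr=0.05)

for step in range(500):
    opt.zero_grad()
    loss = sum(((render(params, v) - p) ** 2).mean() for v, p in zip(views, photos))
    loss.backward()
    opt.step()

print("final photo reconstruction error:", float(loss))
```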

David Luebke, Nvidia’s VP of graphics research, described the technique to India Today as “a holy grail unifying computer vision and computer graphics.”