
Researchers at Tokyo Metropolitan University have created a robotic system that could automate the cleaning of restrooms in convenience stores and other public spaces. This system, introduced in a paper published in Advanced Robotics, will be competing in the Future Convenience Store Challenge (FCSC) at the World Robot Summit (WRS), a competition for state-of-the-art technologies to automate convenience stores.

“Many convenience stores provide restrooms for customers, and restroom cleaning is an essential part of the business,” Kazuyoshi Wada, one of the researchers who developed the system, told TechXplore. “While restroom cleaning is necessary for sanitary purposes, it is mentally and physically demanding work. It is often inappropriate for clerks to clean toilets in convenience stores, and maintaining consistent cleanliness is difficult because perceptions of cleanliness differ from clerk to clerk.”

The WRS established the FCSC competition to encourage the development of new technologies that could enhance efficiency in convenience stores. Robotic systems that can autonomously clean restrooms and toilets could particularly help to improve hygiene, while simplifying the work of shop clerks and convenience store cleaners.

Alexandra Bernadotte, Ph.D., a mathematician and Associate Professor in the Department of Information Technologies and Computer Sciences at MISIS University, has developed algorithms that significantly improve how accurately robotic devices recognize mental commands. The improvement is achieved by optimizing the selection of the command dictionary. The algorithms, implemented in robotic devices, can also be used to transmit information over noisy communication channels. The results have been published in the peer-reviewed international scientific journal Mathematics.

Developers of many systems intended to improve quality of life face the task of raising classification accuracy for objects (audio, video, or electromagnetic signals) when compiling the so-called “dictionaries” of their devices.

The simplest example is a voice assistant. Audio or video transmission devices for remote control of an object in the line-of-sight zone use a limited set of commands. At the same time, it is important that the device’s command classifier accurately recognizes, and does not confuse, the commands included in the device dictionary. Recognition accuracy must also not fall below a certain threshold in the presence of extraneous noise.
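The intuition behind dictionary optimization can be illustrated with a toy sketch: out of a pool of candidate command words, pick the subset whose two most similar words are as dissimilar as possible, so the classifier has the widest margin between commands. This is only a simplified illustration using edit distance as a stand-in for acoustic confusability; it is not the algorithm from the Mathematics paper, and the `pick_dictionary` and `levenshtein` helpers are hypothetical names introduced here.

```python
from itertools import combinations

def levenshtein(a: str, b: str) -> int:
    """Classic dynamic-programming edit distance between two words."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        curr = [i]
        for j, cb in enumerate(b, 1):
            curr.append(min(prev[j] + 1,          # deletion
                            curr[j - 1] + 1,      # insertion
                            prev[j - 1] + (ca != cb)))  # substitution
        prev = curr
    return prev[-1]

def pick_dictionary(candidates, k):
    """Choose the k-word subset whose *closest* pair is most dissimilar
    (brute force; fine for small candidate pools)."""
    best, best_score = None, -1
    for subset in combinations(candidates, k):
        score = min(levenshtein(a, b) for a, b in combinations(subset, 2))
        if score > best_score:
            best, best_score = subset, score
    return best

commands = ["start", "stop", "status", "left", "right", "lift"]
print(pick_dictionary(commands, 3))
```

A real system would replace edit distance with a confusability measure derived from the classifier itself, measured under the expected noise conditions.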

A tentacle robot can gently grasp fragile objects by entangling and ensnaring them – just as a jellyfish would.

Drawing inspiration from nature or, more specifically, from a jellyfish collecting stunned prey, a Harvard team of engineers developed a robotic gripper equipped with thin, soft tentacles to handle irregularly shaped or fragile objects.

The pneumatic rubber tentacles, or filaments, are weak individually, but together they can grasp and securely hold heavy or oddly shaped items. They wrap around objects by simple inflation, without sensing, planning, or feedback control.

An artificial intelligence system from Google’s sibling company DeepMind stumbled on a new way to solve a foundational math problem at the heart of modern computing, a new study finds. A modification of the company’s game engine AlphaZero (famously used to defeat chess grandmasters and legends in the game of Go) outperformed an algorithm that had not been improved on for more than 50 years, researchers say.

The new research focused on multiplying grids of numbers known as matrices. Matrix multiplication is an operation key to many computational tasks, such as processing images, recognizing speech commands, training neural networks, running simulations to predict the weather, and compressing data for sharing on the Internet.
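The 50-year-old baseline alluded to here is Strassen’s 1969 algorithm, which multiplies two 2×2 matrices with 7 scalar multiplications instead of the naive 8; applied recursively to matrix blocks, that saving compounds. DeepMind’s system searched for schemes of the same kind with even fewer multiplications for certain matrix sizes. A minimal sketch of Strassen’s 2×2 scheme:

```python
def strassen_2x2(A, B):
    """Multiply two 2x2 matrices with 7 scalar multiplications (p1..p7)
    instead of the naive 8 -- Strassen's 1969 scheme."""
    (a, b), (c, d) = A
    (e, f), (g, h) = B
    p1 = a * (f - h)
    p2 = (a + b) * h
    p3 = (c + d) * e
    p4 = d * (g - e)
    p5 = (a + d) * (e + h)
    p6 = (b - d) * (g + h)
    p7 = (a - c) * (e + f)
    # Recombine the 7 products into the 4 entries of the result.
    return [[p5 + p4 - p2 + p6, p1 + p2],
            [p3 + p4,           p1 + p5 - p3 - p7]]

print(strassen_2x2([[1, 2], [3, 4]], [[5, 6], [7, 8]]))  # [[19, 22], [43, 50]]
```

Because the entries can themselves be matrix blocks, trimming even one multiplication from a small base case lowers the asymptotic cost of large matrix products, which is why finding new base-case schemes matters.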

AI image generators, which create fantastical sights at the intersection of dreams and reality, bubble up on every corner of the web. Their entertainment value is demonstrated by an ever-expanding treasure trove of whimsical and random images serving as indirect portals to the brains of human designers. A simple text prompt yields a nearly instantaneous image, satisfying our primitive brains, which are hardwired for instant gratification.

Although seemingly nascent, the field of AI-generated art can be traced back as far as the 1960s with early attempts using symbolic rule-based approaches to make technical images. While the progression of models that untangle and parse words has gained increasing sophistication, the explosion of generative art has sparked debate around copyright, disinformation, and biases, all mired in hype and controversy.

Yilun Du, a Ph.D. student in the Department of Electrical Engineering and Computer Science and an affiliate of MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), recently developed a new method that makes models like DALL-E 2 more creative and improves their scene understanding. Here, Du describes how these models work, whether this technical infrastructure can be applied to other domains, and how we draw the line between AI and human creativity.

Over the last three decades, the digital world that we access through smartphones and computers has grown so rich and detailed that much of our physical world has a corresponding life in this digital reality. Today, the physical and digital realities are on a steady course to merging, as robots, Augmented Reality (AR) and wearable digital devices enter our physical world, and physical items get their digital twin computer representations in the digital world.

These digital twins can be uniquely identified and protected from manipulation thanks to crypto technologies like blockchains. The trust that these technologies provide is extremely powerful, helping to fight counterfeiting, increase supply chain transparency, and enable the circular economy. However, a weak point is that there is no versatile and generally applicable identifier of physical items that is as trustworthy as a blockchain. This breaks the connection between the physical and digital twins and therefore limits the potential of technical solutions.
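The binding between a physical item and its digital twin can be sketched as storing a hash of the item’s physical fingerprint in an append-only, hash-chained log. This is a toy stand-in for a blockchain, not the CSR-based system the paper proposes; the `register_twin` and `verify_item` functions and the ledger layout are illustrative assumptions.

```python
import hashlib
import json

def register_twin(item_id: str, fingerprint_reading: bytes, ledger: list) -> dict:
    """Bind a physical fingerprint reading to a digital-twin record by
    appending its hash to a hash-chained log (a blockchain stand-in)."""
    prev_hash = ledger[-1]["hash"] if ledger else "0" * 64
    record = {
        "item_id": item_id,
        "fingerprint_hash": hashlib.sha256(fingerprint_reading).hexdigest(),
        "prev": prev_hash,  # chaining makes past records tamper-evident
    }
    record["hash"] = hashlib.sha256(
        json.dumps(record, sort_keys=True).encode()).hexdigest()
    ledger.append(record)
    return record

def verify_item(item_id: str, fingerprint_reading: bytes, ledger: list) -> bool:
    """Check that a fresh reading matches the registered fingerprint."""
    h = hashlib.sha256(fingerprint_reading).hexdigest()
    return any(r["item_id"] == item_id and r["fingerprint_hash"] == h
               for r in ledger)

ledger = []
register_twin("pallet-42", b"optical-pattern-reading", ledger)
print(verify_item("pallet-42", b"optical-pattern-reading", ledger))  # True
```

The weak point the article describes sits outside this sketch: the log is only as trustworthy as the fingerprint itself, which is why the authors pursue unclonable physical identifiers.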

In a new paper published in Light: Science & Applications, an interdisciplinary team of scientists led by Professors Jan Lagerwall (physics) and Holger Voos (robotics) from the University of Luxembourg, Luxembourg, and Prof. Mathew Schwartz (architecture, construction of the built environment) from the New Jersey Institute of Technology, U.S., propose an innovative solution to this problem where physical items are given unique and unclonable fingerprints realized using cholesteric spherical reflectors, or CSRs for short.

For decades, researchers have worked to design robotic hands that mimic the dexterity of human hands in grasping and manipulating objects. However, earlier robotic hands have not been able to withstand the physical impacts that occur in unstructured environments. A research team has now developed a compact robotic finger for dexterous hands that is also capable of withstanding physical impacts in its working environment.

The team of researchers from Harbin University of Technology (China) published their work in the journal Frontiers of Mechanical Engineering on October 14, 2022.

Robots often work in environments that are unpredictable and sometimes unsafe. Physical collisions cannot be avoided when multi-fingered robotic hands are required to work in unstructured environments, such as settings where obstacles move quickly or the robot is required to interact with humans or other robots.