
Social robots, robots that can interact with humans and assist them in their daily lives, are gradually being introduced in numerous real-world settings. These robots could be particularly valuable for helping older adults to complete everyday tasks more autonomously, thus potentially enhancing their independence and well-being.

Researchers at the University of Bari have been investigating the potential of social robots for ambient assisted living applications for several years. Their most recent paper, published in UMAP '22 Adjunct: Adjunct Proceedings of the 30th ACM Conference on User Modeling, Adaptation and Personalization, specifically explores the value of allowing social robots that assist seniors to learn the relationships between a user's routines and his or her affective states.

“Social robots should support older adults in their daily activities and, at the same time, they should contribute to emotional wellness by considering affective factors in everyday situations,” Berardina De Carolis, Stefano Ferilli and Nicola Macciarulo wrote in their paper. “The main goal of this research is to investigate whether it is possible to learn relations between the user’s affective state and routines, made by activities, with the aid of a social robot, Pepper in this case.”
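The paper's goal of learning relations between activities and affective states can be caricatured in code. The sketch below is purely illustrative and is not the authors' method: it assumes the robot can log (activity, observed affect) pairs and uses simple co-occurrence counts to predict the most likely affective state for a routine.

```python
from collections import Counter, defaultdict

class RoutineAffectModel:
    """Toy associative model: counts how often each activity
    co-occurs with an observed affective state, then predicts
    the most frequent state for a routine (a list of activities)."""

    def __init__(self):
        self.counts = defaultdict(Counter)

    def observe(self, activity, affect):
        self.counts[activity][affect] += 1

    def predict(self, routine):
        # Sum the evidence across all activities in the routine.
        total = Counter()
        for activity in routine:
            total.update(self.counts[activity])
        return total.most_common(1)[0][0] if total else None

model = RoutineAffectModel()
model.observe("morning walk", "calm")
model.observe("morning walk", "calm")
model.observe("doctor visit", "anxious")
print(model.predict(["morning walk", "breakfast"]))  # → calm
```

A real system would of course use richer sensing and a proper user model, but the count-based sketch shows the shape of the learning problem: routines become evidence for affective states.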

To efficiently navigate real-world environments, robots typically analyze images collected by imaging devices that are integrated within their body. To enhance the performance of robots, engineers have thus been trying to develop different types of highly performing cameras, sensors and artificial vision systems.

Many artificial vision systems developed so far draw inspiration from the eyes of humans, animals, insects and fish. These systems have different features and characteristics, depending on the environment in which they are designed to operate.

Most existing sensors and cameras are designed to work either on the ground (i.e., in terrestrial environments) or underwater (i.e., in aquatic environments). Bio-inspired artificial vision systems that can operate in both terrestrial and aquatic environments, on the other hand, remain scarce.

Researchers at Oxford University’s Department of Materials, working in collaboration with colleagues from Exeter and Münster, have developed an on-chip optical processor capable of detecting similarities in datasets up to 1,000 times faster than conventional machine learning algorithms running on electronic processors.

The new research published in Optica took its inspiration from Nobel Prize laureate Ivan Pavlov’s discovery of classical conditioning. In his experiments, Pavlov found that by providing another stimulus during feeding, such as the sound of a bell or metronome, his dogs began to link the two experiences and would salivate at the sound alone. The repeated associations of two unrelated events paired together could produce a learned response—a conditional reflex.

Co-first author Dr. James Tan You Sian, who did this work as part of his DPhil in the Department of Materials, University of Oxford, said, “Pavlovian associative learning is regarded as a basic form of learning that shapes the behavior of humans and animals—but adoption in AI systems is largely unheard of. Our research on Pavlovian learning in tandem with optical parallel processing demonstrates the exciting potential for a variety of AI tasks.”
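The conditioning dynamic Pavlov observed, and which the optical processor mimics in hardware, can be sketched with the classic Rescorla–Wagner learning rule. This is a standard textbook model, not the paper's photonic implementation: the associative strength of the bell grows toward an asymptote with each bell–food pairing.

```python
def rescorla_wagner(trials, alpha=0.3, lam=1.0):
    """Associative strength of a conditioned stimulus (the bell)
    after repeated pairing with an unconditioned stimulus (food).
    trials: booleans, True when food accompanies the bell.
    alpha: learning rate; lam: maximum associative strength."""
    v = 0.0
    history = []
    for food in trials:
        # Update toward lam when food is present, toward 0 otherwise.
        v += alpha * ((lam if food else 0.0) - v)
        history.append(v)
    return history

# Ten bell-plus-food pairings: the response strengthens each time.
strengths = rescorla_wagner([True] * 10)
```

After ten pairings the modeled reflex is nearly saturated, which is the "learned response" the article describes; the optical chip realizes an analogous association directly in photonic hardware.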

India is a developing country where robotics still lags because of the limited availability of robotics components, 3D-printed parts and suitably high-quality motors. Robotics requires high-end technology, typically implemented by researchers or robotics scientists.

Despite all this, if a teacher (not a robotics scientist) can work in the field and develop a prototype comparable with the robots built by large, well-resourced companies with the help of their best engineers and huge budgets, then that person deserves appreciation.

A computer science teacher in Kendriya Vidyalaya, IIT Bombay, Dinesh Kunwar Patel has developed the world’s first social and educational humanoid Robot ‘Shalu’ that can speak 47 languages, including 9 Indian and 38 foreign languages. The homemade Robot ‘Shalu’ is made of waste materials, including cardboard, wood, and aluminium.

The AI Dungeon project first became available in December 2019 and attracted attention due to its advanced artificial intelligence — it generated coherent text adventures in which you could perform any action by typing it into the input window. Now AI Dungeon has appeared on Steam.

The gameplay resembles 1970s text adventures: you’re told what’s going on, and you write what you’re going to do. But while those early games accepted only tightly constrained commands, AI Dungeon tries to accommodate whatever you type. You only have to specify the type of input: action, word, or event.

AI Dungeon also allows you to generate a world, fill it with details of your choice, or dive into the worlds of other players. In addition, you can control the progress of an adventure: force the AI to generate a paragraph again or even rewrite it completely by hand. Adventures can also be played through with your friends.
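The turn loop described above — the game narrates, you type, the model continues the story — can be sketched minimally. This is a toy, not AI Dungeon's code: `generate_continuation` is a stand-in for the large generative language model that the real game calls, and the input types mirror the article's "action, word, or event" description.

```python
def generate_continuation(story, player_input, input_type):
    """Stand-in for the language model. AI Dungeon would condition
    a large generative model on the story so far; this stub just
    wraps the input in canned narration for illustration."""
    if input_type == "action":
        return f"You {player_input}. The cave echoes in reply."
    if input_type == "word":
        return f'"{player_input}", you shout into the dark.'
    return f"Suddenly, {player_input}."  # an "event" input

def play_turn(story, player_input, input_type="action"):
    # Each turn appends a model-generated continuation to the story.
    continuation = generate_continuation(story, player_input, input_type)
    return story + [continuation]

story = ["You stand at the mouth of a cave."]
story = play_turn(story, "light a torch")
```

The "rewrite it completely by hand" feature amounts to replacing the last list entry before the next turn, which is why the story is kept as an editable sequence of paragraphs.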

Dubbed GRACE, the robot hand can even bend fingers and make realistic human movements.

Scientists have developed a new type of artificial muscle that can lift 1,000 times its own weight:
* 3D actuators were combined to form a real-life robot hand that could lift 8 kg
* The high-strength properties could be applied to create higher capabilities in other body parts and a range of devices

A team of researchers from the Italian Institute of Technology has developed a new class of high-strength artificial muscles that can stretch and contract like a human muscle in a way never achieved before. According to a recent research paper, the muscles perform with a level of versatility and grace closely matching life-like movements, and provide a boost to the development of three-dimensional functional devices such as artificial body parts. This new class of strong pneumatic artificial muscles was combined to form a robot hand that can lift up to a thousand times its own weight.

The autonomous, miniaturized robot could remotely mimic movements used in surgery in space.

MIRA, short for miniaturized in vivo robotic assistant.


An autonomous, miniaturized robot could soon perform simulated tasks that mimic movements used in surgery without the help of doctors or astronauts.

Meet MIRA, short for miniaturized in vivo robotic assistant. Invented by Nebraska Engineering Professor Shane Farritor, the surgical robot is being readied for a 2024 test mission aboard the International Space Station. For this, NASA recently awarded the University of Nebraska-Lincoln $100,000 through the Established Program to Stimulate Competitive Research (EPSCoR) at the University of Nebraska Omaha.


In a paper distributed via ArXiv, titled “Exploring the Unprecedented Privacy Risks of the Metaverse,” boffins at UC Berkeley in the US and the Technical University of Munich in Germany play-tested an “escape room” virtual reality (VR) game to better understand just how much data a potential attacker could access. Through a 30-person study of VR usage, the researchers – Vivek Nair (UCB), Gonzalo Munilla Garrido (TUM), and Dawn Song (UCB) – created a framework for assessing and analyzing potential privacy threats. They identified more than 25 examples of private data attributes available to potential attackers, some of which would be difficult or impossible to obtain from traditional mobile or web applications.

The metaverse that is rapidly becoming part of our world has long been an essential part of the gaming community: interaction-based games like Second Life, Pokemon Go, and Minecraft have long served as virtual social interaction platforms. Philip Rosedale, the founder of Second Life, and many other security experts have lately been vocal about Meta’s impact on data privacy. Since the core concept is similar, the same potential data privacy issues can plausibly be identified within Meta’s platforms.
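To make the privacy risk concrete, here is one illustrative example of the kind of attribute the researchers warn about (the specific formula is an assumption of this sketch, not taken from their paper): any VR app receiving raw headset pose data can estimate a standing user's body height, since eye level is roughly 93–94% of stature by common anthropometric rules of thumb.

```python
from statistics import median

def estimate_height_m(headset_y_samples):
    """Illustrative only: infer body height from a stream of
    headset y-positions (metres above the floor). The 0.935
    eye-level-to-stature ratio is an approximate anthropometric
    assumption, not a figure from the cited paper."""
    eye_level = median(headset_y_samples)  # robust to crouching spikes
    return eye_level / 0.935

samples = [1.62, 1.63, 1.61, 1.05, 1.62]  # metres; 1.05 = a brief crouch
height = round(estimate_height_m(samples), 2)
```

Height on its own seems innocuous, but combined with other telemetry-derived attributes it narrows identity quickly, which is why pose streams are hard to match from traditional mobile or web applications.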

There has been a buzz going around the tech market that by the end of 2022 the metaverse could revive AR/VR device shipments, taking them as high as 14.19 million units, compared to 9.86 million in 2021 — a year-over-year increase of roughly 44%. The AR/VR device market is expected to boom despite component shortages and the difficulty of developing new technologies. The growth momentum will also be driven by the increased demand for remote interactivity stemming from the pandemic. But what happens when these VR or metaverse headsets start stealing your precious data? Not just headsets but smart glasses too are prime suspects when it comes to privacy concerns.

Several weeks ago, Facebook introduced a new line of smart glasses called Ray-Ban Stories, which can take photos, shoot 30-second videos, and post them on the owner’s Facebook feed. Priced at US$299 and powered by Facebook’s virtual assistant, the web-connected shades can also take phone calls and play music or podcasts.