A team of roboticists at the University of Canberra’s Collaborative Robotics Lab, working with a sociologist colleague from The Australian National University, has found that humans interacting with an LLM-enabled humanoid robot had mixed reactions. In their paper published in the journal Scientific Reports, the group describes what they observed as attendees interacted with an LLM-enabled humanoid robot stationed at an innovation festival, along with the feedback given by the people who took part in the interactions.
Over the past couple of years, LLMs such as ChatGPT have taken the world by storm, with some going so far as to suggest that the new technology will soon make many human workers obsolete. Despite such fears, scientists continue to improve the technology and to deploy it in new settings, such as inside an existing humanoid robot. That is what the team in Australia did: they added ChatGPT to the conversational system of a robot named Pepper and then stationed the robot at an innovation festival in Canberra, where attendees were encouraged to interact with it.
Before it was given an LLM, Pepper was already capable of moving around autonomously and interacting with people on a relatively simple level. One of its hallmarks is its ability to maintain eye contact. Such abilities, the team suggested, made the robot a good candidate for testing human interactions with LLM-enabled humanoid robots “in the wild.”