(CN) — How many robots does it take to set the dinner table? The answer may depend on the robot, according to a new study published in the journal iScience.
More importantly, how a robot completes a task may matter less than why it chooses to complete the task in a particular way.
Italian researchers at the University of Palermo’s Robotics Lab have designed a robot to think aloud, enabling users to hear its thought process and better understand the robot’s decisions.
“If you were able to hear what the robots are thinking, then the robot might be more trustworthy,” said co-author Antonio Chella. “The robots will be easier to understand for laypeople, and you don’t need to be a technician or engineer. In a sense, we can communicate and collaborate with the robot better.”
Chella and fellow scientist Arianna Pipitone modified a SoftBank Robotics robot called Pepper to speak as it reasons through a task, which in their experiments happened to be setting a dinner table.
To enable inner speech in a real robot, the researchers integrated the ACT-R cognitive architecture with ROS, a state-of-the-art robot control system, along with standard routines for text-to-speech (TTS) and speech-to-text (STT) processing. The resulting framework was then deployed on a Pepper robot for benchmark testing and validation in a human-robot cooperative scenario.
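As a rough illustration of the wiring the study describes, each step produced by the cognitive model can be mirrored to a text-to-speech channel before the corresponding action is executed, so the robot's reasoning becomes audible. The sketch below is hypothetical, not the released framework; the real TTS routine and ROS action client are stubbed out with plain Python callables.

```python
# Hypothetical sketch of an inner-speech loop: every reasoning step is
# voiced aloud (via a TTS stand-in) before its action is carried out.
# Names and structure here are illustrative, not from the paper.

class InnerSpeechRobot:
    def __init__(self, tts, actuator):
        self.tts = tts            # stand-in for a real TTS routine
        self.actuator = actuator  # stand-in for a ROS action client

    def run(self, plan):
        """Voice each reasoning step, then execute its action."""
        for thought, action in plan:
            self.tts(thought)      # inner speech made audible
            self.actuator(action)  # carry out the step

spoken, done = [], []
robot = InnerSpeechRobot(spoken.append, done.append)
robot.run([("I should fetch the fork first.", "fetch fork"),
           ("The fork goes to the left of the plate.", "place fork left")])
```

The point of the design is simply that the speech channel and the action channel consume the same stream of reasoning steps, so a bystander hears the plan as it unfolds.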
Unveiled in 2014, Pepper is the world’s first social humanoid robot able to recognize faces and basic human emotions. Think Rosie, the sentient robot maid from “The Jetsons,” only cuter.
In the experiments, the robot and a single human stood in front of a table, next to a shelf containing utensils. The robot had to select utensils and set them on the table according to the human's instructions.
With the help of inner speech, Pepper was better at resolving dilemmas, the scientists found. In one experiment, a human asked Pepper to place a napkin in the wrong spot, contradicting an etiquette rule. Pepper articulated a series of self-directed questions before concluding that the user might be confused.
“Ehm, this situation upsets me. I would never break the rules, but I can’t upset him, so I’m doing what he wants,” Pepper responded before placing the napkin at the requested spot.
Through Pepper’s inner voice, the user learned that Pepper recognized a dilemma and solved it by prioritizing the human’s request. Such transparency could help establish human-robot trust, the researchers suggest.
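The dilemma resolution described above can be sketched as a simple priority rule: when a request contradicts etiquette, voice the conflict aloud, then defer to the human. The etiquette table, function names, and phrasing below are illustrative assumptions, not taken from the paper.

```python
# Hedged sketch of the napkin dilemma: the robot detects a conflict
# between an etiquette rule and the user's request, voices the conflict
# (inner speech), and resolves it by prioritizing the human's request.
# The rule table and wording are illustrative only.

ETIQUETTE = {"napkin": "left of the plate", "fork": "left of the plate"}

def place(item, requested_spot, speak):
    """Return the chosen spot, speaking the reasoning along the way."""
    proper_spot = ETIQUETTE.get(item)
    if proper_spot is not None and requested_spot != proper_spot:
        speak(f"This situation upsets me: the {item} belongs "
              f"{proper_spot}, but the user wants it {requested_spot}.")
        speak("I can't upset the user, so I'm doing what they want.")
    return requested_spot  # the human's request always wins

voiced = []
spot = place("napkin", "right of the plate", voiced.append)
```

Note that the speech is a side effect of the decision, not an input to it: the same placement would happen silently, but voicing the conflict is what makes the robot's priorities legible to the user.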
Common in humans, inner speech can be used to gain clarity, improve moral guidance, and evaluate situations in order to make better decisions.
The researchers found that the robot had a higher task-completion rate when engaging in self-dialogue. Thanks to inner speech, Pepper exceeded the functional and moral requirements that international standards set for collaborative robots.
“People were very surprised by the robot’s ability,” Pipitone said. “The approach makes the robot different from typical machines because it has the ability to reason, to think.”
Collaborative robots are increasingly common in industrial settings, which lends urgency to Pipitone and Chella's study. Inner speech lets humans understand a robot's motivations without the technical knowledge needed to decipher its code.
However, the robot takes longer to complete tasks when it talks to itself, so some users may find the method inefficient. Still, the researchers hope their work provides a framework for exploring how self-dialogue can help robots focus, plan and learn.
“Inner speech could be useful in all the cases where we trust the computer or a robot for the evaluation of a situation,” Chella said.