When interacting with people, robots and chatbots can make mistakes that violate a person's trust, and people may begin to see the bots as unreliable. Trust repair strategies built into such bots can soften the negative effects of these violations, but it is unclear whether they can fully restore trust, or how effective they remain after repeated violations.
That is why researchers at the University of Michigan decided to study robot behavior strategies for restoring trust between a bot and a person. The strategies examined were apologies, denials, explanations, and promises of better performance.
In the experiment, 240 participants worked with a robot as a co-worker on a task in which the robot sometimes made mistakes. The robot would violate the participant's trust and then offer one of the repair strategies to restore it. Participants interacted with the robot as team members through an interactive virtual environment built in Unreal Engine 4.
The virtual environment in which experiment participants interacted with the robot.
The environment was modeled on a realistic warehouse setting. Participants sat at a table with two displays and three buttons. The displays showed the team's current score, the box-processing rate, and the serial number of the box the participant needed to receive from the robot teammate. The team's score increased by 1 point each time a correct box was placed on the conveyor belt and decreased by 1 point each time an incorrect box was placed there. If the robot picked the wrong box and the participant flagged it as a mistake, an indicator appeared on the screen showing that the box was incorrect, but no points were added to or subtracted from the team's score.
The block diagram illustrates the possible outcomes depending on which box the robot picks and which decision the participant makes.
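As a rough textual rendering of that diagram, the sketch below encodes the scoring rules in Python; the function and argument names are illustrative assumptions, not identifiers from the study.

```python
def update_score(score: int, robot_picked_correct_box: bool, participant_flagged_error: bool) -> int:
    """Apply the scoring rules described above to a single box (illustrative only)."""
    if participant_flagged_error:
        # The participant marks the box as a mistake: an error indicator is shown,
        # but no points are added to or subtracted from the team score.
        return score
    # Otherwise the box goes onto the conveyor belt:
    # +1 for a correct box, -1 for an incorrect one.
    return score + 1 if robot_picked_correct_box else score - 1
```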
“To examine our hypotheses, we used a between-subjects design with four repair conditions and two control conditions,” said Connor Esterwood, a researcher at the U-M School of Information and lead author of the study.
The control conditions took the form of the robot saying nothing. In one, the robot stayed silent after making a mistake, making no attempt to restore the person's trust. In the other, the robot performed perfectly, made no mistakes during the experiment, and likewise said nothing.
The repair conditions used in this study took the form of an apology, a denial, an explanation, or a promise, delivered after each error. As an apology, the robot said: “I'm sorry I got the wrong box that time.” For the denial, the bot said: “I picked the right box that time, so something else must have gone wrong.” For the explanation, the robot said: “I see that was the wrong serial number.” And finally, for the promise condition, the robot said: “Next time I will do better and pick the right box.”
Each of these responses was designed to present only one type of trust repair strategy and to avoid unintentionally combining two or more strategies. During the experiment, these repair statements were delivered to participants through both audio and on-screen text. Notably, the robot changed its behavior only temporarily after delivering a repair strategy, picking the correct box twice before the next error occurred.
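The conditions and utterances can be summarized as a simple lookup table. The sketch below is only an illustration of the design as described above; the names and structure are assumptions, not material from the study itself.

```python
# Illustrative mapping of experimental conditions to the robot's utterances.
REPAIR_UTTERANCES = {
    "apology":     "I'm sorry I got the wrong box that time.",
    "denial":      "I picked the right box that time, so something else must have gone wrong.",
    "explanation": "I see that was the wrong serial number.",
    "promise":     "Next time I will do better and pick the right box.",
    "no_repair":   None,  # control: robot stays silent after a mistake
    "perfect":     None,  # control: robot never errs and says nothing
}

# After delivering a repair statement, the robot picked the correct box twice
# before erring again, so one cycle looked roughly like:
# error -> repair utterance -> correct pick -> correct pick -> error -> ...
```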
To analyze the data, the researchers used a series of non-parametric Kruskal-Wallis rank tests, followed by Dunn's post hoc tests for multiple comparisons with the Benjamini-Hochberg correction to control for multiple hypothesis testing.
“We chose these methods over others because the data in this study were not normally distributed. The first of these tests examined our manipulation of trustworthiness by comparing the differences in trustworthiness between the perfect-performance condition and the no-repair condition. The second used three separate Kruskal-Wallis tests, followed by post hoc examinations, to determine participants' ratings of the robot's ability, integrity, and benevolence,” said Esterwood and co-author Lionel Robert, professor of information.
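For readers who want to run this kind of analysis themselves, a minimal sketch in Python is shown below. It assumes a long-format table with hypothetical column names, and the use of scipy and scikit-posthocs is an assumption for illustration; this is not the authors' own analysis code.

```python
# Kruskal-Wallis omnibus test followed by Dunn's post hoc comparisons
# with Benjamini-Hochberg (FDR) correction.
import pandas as pd
from scipy.stats import kruskal
import scikit_posthocs as sp

# Hypothetical data file: one trustworthiness rating per participant,
# with a "condition" column naming the repair/control condition.
df = pd.read_csv("trust_ratings.csv")

# Omnibus test: does perceived trustworthiness differ across conditions?
groups = [g["trustworthiness"].values for _, g in df.groupby("condition")]
h_stat, p_value = kruskal(*groups)
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Pairwise Dunn post hoc comparisons, adjusted with Benjamini-Hochberg.
posthoc = sp.posthoc_dunn(df, val_col="trustworthiness",
                          group_col="condition", p_adjust="fdr_bh")
print(posthoc)
```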
The main findings of the study:
- None of the trust repair strategies fully restored the robot's trustworthiness.
- Apologies, explanations, and promises could not restore perceptions of the robot's ability.
- Apologies, explanations, and promises could not restore perceptions of its integrity.
- Apologies, explanations, and promises restored perceptions of the robot's benevolence equally well.
- Denials did not restore any perception of the robot's trustworthiness.
- After three failures, none of the trust repair strategies ever fully restored the robot's trustworthiness.
The results have two implications. According to Esterwood, researchers need to develop more effective repair strategies to help robots rebuild trust after their mistakes. In addition, robots need to be sure they have mastered a task before attempting to restore a person's trust in them.
“Otherwise, they risk losing a person's trust to a degree that it cannot be restored,” Esterwood concluded.