The results show that outcome feedback has a more significant impact than explainability on improving users' trust in AI, but this enhanced trust does not lead to a corresponding improvement in performance. Further analysis suggests that feedback can induce users to over-trust (i.e., accept the AI's suggestions when it is wrong) or under-trust (i.e., ignore the AI's suggestions when it is correct), which may negate the benefits of increased trust and produce a "trust-performance paradox". The researchers call for future work to focus on designing explanation strategies that foster appropriate trust, so as to improve the effectiveness of human-AI collaboration.
The Interpretability of Artificial Intelligence and the Impact of Outcome Feedback on Trust: A Comparative Study
Q3: How do outcome feedback and model interpretability affect users' task performance?
Xue Zhirong's knowledge base