Comparing Effects of Attribution-based, Example-based, and Feature-based Explanation Methods on AI-Assisted Decision-Making
Trust calibration is essential in AI-assisted decision-making tasks. If human users understand the reasons for an AI model's prediction, they can assess whether the prediction is reasonable. Especially for high-risk tasks like mushroom hunting (where a …