Poster
A Shapley-value Guided Rationale Editor for Rationale Learning
Zixin Kuang · Meng-Fen Chiang · Wang-Chien Lee
Rationale learning aims to automatically uncover the underlying explanations behind NLP predictions. Previous studies in rationale learning mainly focus on the relevance of individual tokens to predictions, without considering their marginal contributions or the collective readability of the extracted rationales. Through an empirical analysis, we argue that the sufficiency, informativeness, and readability of rationales are essential for explaining diverse end-task predictions. Accordingly, we propose the Shapley-value Guided Rationale Editor (SHARE), an unsupervised approach that refines editable rationales while predicting task outcomes. SHARE extracts a sequence of tokens as a rationale, providing a collective explanation that is sufficient, informative, and readable. SHARE is highly adaptable to tasks such as sentiment analysis, claim verification, and question answering, and integrates seamlessly with various language models to provide explainability. Extensive experiments demonstrate its effectiveness in balancing sufficiency, informativeness, and readability across diverse applications. Our code and datasets are available at https://github.com/zixinK/SHARE.
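To make the notion of a token's marginal contribution concrete, below is a minimal sketch of estimating per-token Shapley values with Monte Carlo permutation sampling. This is not the SHARE implementation (see the repository above for that); the `score` predictor, the `[MASK]` placeholder, and the sample count are illustrative assumptions.

```python
# Minimal sketch: Monte Carlo estimate of per-token Shapley values.
# Each token's value is its average marginal gain in the model's score
# when revealed in a random order, starting from a fully masked input.
import random
from typing import Callable, List


def shapley_token_values(
    tokens: List[str],
    score: Callable[[List[str]], float],  # task model's score on a (partially masked) input
    num_samples: int = 200,
    mask_token: str = "[MASK]",
) -> List[float]:
    n = len(tokens)
    values = [0.0] * n
    for _ in range(num_samples):
        order = random.sample(range(n), n)   # random permutation of token positions
        current = [mask_token] * n           # start from a fully masked input
        prev = score(current)
        for i in order:                      # reveal tokens one at a time
            current[i] = tokens[i]
            cur = score(current)
            values[i] += cur - prev          # marginal gain of revealing token i
            prev = cur
    return [v / num_samples for v in values]


if __name__ == "__main__":
    # Toy predictor standing in for a real task model: fraction of
    # sentiment-bearing words in the (partially masked) input.
    POSITIVE = {"great", "love"}
    toy_score = lambda toks: sum(t in POSITIVE for t in toks) / max(len(toks), 1)
    sentence = "i love this great movie".split()
    for tok, val in zip(sentence, shapley_token_values(sentence, toy_score)):
        print(f"{tok:>6}: {val:+.3f}")
```

Tokens with high estimated values would be candidates for inclusion in the rationale; unlike per-token relevance scores, these values account for how tokens contribute in combination with the rest of the input.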