Poster | Tue 14:00 | Improving Classifier Confidence using Lossy Label-Invariant Transformations | Sooyong Jang · Insup Lee · James Weimer

Poster | Tue 14:00 | Have We Learned to Explain?: How Interpretability Methods Can Learn to Encode Predictions in their Interpretations | Neil Jethani · Mukund Sudarshan · Yindalon Aphinyanaphongs · Rajesh Ranganath

Poster | Tue 14:00 | Influence Decompositions For Neural Network Attribution | Kyle Reing · Greg Ver Steeg · Aram Galstyan

Poster | Tue 14:00 | Shapley Flow: A Graph-based Approach to Interpreting Model Predictions | Jiaxuan Wang · Jenna Wiens · Scott Lundberg