Eric Wallace
Pathologies of Neural Models Make Interpretations Difficult
S Feng, E Wallace, M Iyyer, P Rodriguez, A Grissom II, J Boyd-Graber
EMNLP, 2018
Cited by 38*
Universal Adversarial Triggers for Attacking and Analyzing NLP
E Wallace, S Feng, N Kandpal, M Gardner, S Singh
arXiv preprint arXiv:1908.07125, 2019
Cited by 9
Interpreting Neural Networks With Nearest Neighbors
E Wallace, S Feng, J Boyd-Graber
EMNLP BlackboxNLP, 2018
Cited by 9
Compositional Questions Do Not Necessitate Multi-hop Reasoning
S Min, E Wallace, S Singh, M Gardner, H Hajishirzi, L Zettlemoyer
ACL, 2019
Cited by 8
Trick Me If You Can: Human-in-the-loop Generation of Adversarial Question Answering Examples
E Wallace, P Rodriguez, S Feng, I Yamada, J Boyd-Graber
TACL, 2019
Cited by 7*
Understanding Impacts of High-Order Loss Approximations and Features in Deep Learning Interpretation
S Singla, E Wallace, S Feng, S Feizi
ICML, 2019
Cited by 4
Misleading Failures of Partial-input Baselines
S Feng, E Wallace, J Boyd-Graber
ACL, 2019
Cited by 3
AllenNLP Interpret: A Framework for Explaining Predictions of NLP Models
E Wallace, J Tuyls, J Wang, S Subramanian, M Gardner, S Singh
EMNLP Demo, 2019
Do NLP Models Know Numbers? Probing Numeracy in Embeddings
E Wallace, Y Wang, S Li, S Singh, M Gardner
EMNLP, 2019