Ananya B. Sai
Cited by
A survey of evaluation metrics used for NLG systems
AB Sai, AK Mohankumar, MM Khapra
ACM Computing Surveys (CSUR) 55 (2), 1-39, 2022
Improving dialog evaluation with a multi-reference adversarial dataset and large scale pretraining
AB Sai, AK Mohankumar, S Arora, MM Khapra
Transactions of the Association for Computational Linguistics 8, 810-827, 2020
NL-Augmenter: A framework for task-sensitive natural language augmentation
KD Dhole, V Gangal, S Gehrmann, A Gupta, Z Li, S Mahamood, ...
arXiv preprint arXiv:2112.02721, 2021
Re-evaluating ADEM: A deeper look at scoring dialogue responses
A Sai, M Gupta, M Khapra, M Srinivasan
AAAI, 2019
Perturbation CheckLists for evaluating NLG evaluation metrics
AB Sai, T Dixit, DY Sheth, S Mohan, MM Khapra
arXiv preprint arXiv:2109.05771, 2021
ElimiNet: A Model for Eliminating Options for Reading Comprehension with Multiple Choice Questions
S Parikh, A Sai, P Nema, MM Khapra
IJCAI, 2018
IndicMT Eval: A Dataset to Meta-Evaluate Machine Translation Metrics for Indian Languages
AB Sai, T Dixit, V Nagarajan, A Kunchukuttan, P Kumar, MM Khapra, ...
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
A tutorial on evaluation metrics used in natural language generation
MM Khapra, AB Sai
Proceedings of the 2021 Conference of the North American Chapter of the …, 2021
BiPhone: Modeling Inter Language Phonetic Influences in Text
A Gupta, AB Sai, R Sproat, Y Vasilevski, JS Ren, A Jash, SS Sodhi, ...
arXiv preprint arXiv:2307.03322, 2023
Frustratingly Poor Performance of Reading Comprehension Models on Non-adversarial Examples
S Parikh, AB Sai, P Nema, MM Khapra
arXiv preprint arXiv:1904.02665, 2019