Prasanna Parthasarathi
Title · Cited by · Year
Extending neural generative conversational model using external knowledge sources
P Parthasarathi, J Pineau
arXiv preprint arXiv:1809.05524, 2018
Cited by 102 · 2018
Unnatural language inference
K Sinha, P Parthasarathi, J Pineau, A Williams
arXiv preprint arXiv:2101.00010, 2020
Cited by 99 · 2020
Learning an unreferenced metric for online dialogue evaluation
K Sinha, P Parthasarathi, J Wang, R Lowe, WL Hamilton, J Pineau
arXiv preprint arXiv:2005.00583, 2020
Cited by 94 · 2020
Attend, adapt and transfer: Attentive deep architecture for adaptive transfer from multiple sources in the same domain
J Rajendran, A Srinivas, MM Khapra, P Prasanna, B Ravindran
arXiv preprint arXiv:1510.02879, 2015
Cited by 70 · 2015
Sometimes we want ungrammatical translations
P Parthasarathi, K Sinha, J Pineau, A Williams
Findings of the Association for Computational Linguistics: EMNLP 2021, 3205-3227, 2021
Cited by 15* · 2021
Local structure matters most: Perturbation study in NLU
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955, 2021
Cited by 15 · 2021
Deep learning on a healthy data diet: Finding important examples for fairness
A Zayed, P Parthasarathi, G Mordido, H Palangi, S Shabanian, S Chandar
Proceedings of the AAAI Conference on Artificial Intelligence 37 (12), 14593 …, 2023
Cited by 13 · 2023
Maca: A modular architecture for conversational agents
HP Truong, P Parthasarathi, J Pineau
Proceedings of the 18th Annual SIGdial Meeting on Discourse and Dialogue, 93-102, 2017
Cited by 12 · 2017
Adaapt: A deep architecture for adaptive policy transfer from multiple sources
J Rajendran, P Prasanna, B Ravindran, MM Khapra
arXiv preprint arXiv 1510, 2015
Cited by 12 · 2015
Demystifying neural language models’ insensitivity to word-order
L Clouatre, P Parthasarathi, A Zouaq, S Chandar
arXiv preprint arXiv:2107.13955 5, 45-67, 2021
Cited by 10 · 2021
Do Encoder Representations of Generative Dialogue Models have sufficient summary of the Information about the task?
P Parthasarathi, J Pineau, S Chandar
Proceedings of the 22nd Annual Meeting of the Special Interest Group on …, 2021
Cited by 7* · 2021
Measuring the knowledge acquisition-utilization gap in pretrained language models
A Kazemnejad, M Rezagholizadeh, P Parthasarathi, S Chandar
arXiv preprint arXiv:2305.14775, 2023
Cited by 4 · 2023
Memory augmented optimizers for deep learning
PA McRae*, P Parthasarathi*, M Assran, S Chandar
arXiv preprint arXiv:2106.10708, 2021
Cited by 4 · 2021
Practical Takes on Federated Learning with Pretrained Language Models
A Agarwal, M Rezagholizadeh, P Parthasarathi
Findings of the Association for Computational Linguistics: EACL 2023, 454-471, 2023
Cited by 3 · 2023
On task-level dialogue composition of generative transformer model
P Parthasarathi, A Neelakantan, S Narang
arXiv preprint arXiv:2010.04826, 2020
Cited by 3 · 2020
Variational encoder decoder for image generation conditioned on captions
J Romoff, N Angelard-Gontier, P Parthasarathi
Cited by 3 · 2016
A brief study on the effects of training generative dialogue models with a semantic loss
P Parthasarathi, M Abdelsalam, J Pineau, S Chandar
arXiv preprint arXiv:2106.10619, 2021
Cited by 2 · 2021
EWEK-QA: Enhanced Web and Efficient Knowledge Graph Retrieval for Citation-based Question Answering Systems
M Dehghan, MA Alomrani, S Bagga, D Alfonso-Hermelo, K Bibi, ...
arXiv preprint arXiv:2406.10393, 2024
Cited by 1 · 2024
Towards Practical Tool Usage for Continually Learning LLMs
J Huang, P Parthasarathi, M Rezagholizadeh, S Chandar
arXiv preprint arXiv:2404.09339, 2024
Cited by 1 · 2024
Language Model-In-The-Loop: Data Optimal Approach to Learn-To-Recommend Actions in Text Games
A Vaithilingam Sudhakar, P Parthasarathi, J Rajendran, S Chandar
arXiv e-prints, arXiv:2311.07687, 2023
Cited by 1* · 2023