Satwik Bhattamishra
Are NLP Models really able to Solve Simple Math Word Problems?
A Patel, S Bhattamishra, N Goyal
NAACL 2021, 2021
Cited by: 445
Submodular optimization-based diverse paraphrasing and its effectiveness in data augmentation
A Kumar*, S Bhattamishra*, M Bhandari, P Talukdar
NAACL 2019, 3609-3619, 2019
Cited by: 126
On the ability and limitations of transformers to recognize formal languages
S Bhattamishra, K Ahuja, N Goyal
EMNLP 2020, 2020
Cited by: 115
On the computational power of transformers and its implications in sequence modeling
S Bhattamishra, A Patel, N Goyal
CoNLL 2020, 2020
Cited by: 60
Unsung challenges of building and deploying language technologies for low resource language communities
P Joshi, C Barnes, S Santy, S Khanuja, S Shah, A Srinivasan, ...
arXiv preprint arXiv:1912.03457, 2019
Cited by: 40
Revisiting the Compositional Generalization Abilities of Neural Sequence Models
A Patel, S Bhattamishra, P Blunsom, N Goyal
ACL 2022, 2022
Cited by: 23
Simplicity Bias in Transformers and their Ability to Learn Sparse Boolean Functions
S Bhattamishra, A Patel, V Kanade, P Blunsom
ACL 2023, 2022
Cited by: 21
Understanding In-Context Learning in Transformers and LLMs by Learning to Learn Discrete Functions
S Bhattamishra, A Patel, P Blunsom, V Kanade
ICLR 2024, 2023
Cited by: 20
On the Practical Ability of Recurrent Neural Networks to Recognize Hierarchical Languages
S Bhattamishra, K Ahuja, N Goyal
COLING 2020, 2020
Cited by: 14
On the Ability of Self-Attention Networks to Recognize Counter Languages
S Bhattamishra, K Ahuja, N Goyal
EMNLP 2020, 2020
Cited by: 10
MAGNIFICo: Evaluating the In-Context Learning Ability of Large Language Models to Generalize to Novel Interpretations
A Patel, S Bhattamishra, S Reddy, D Bahdanau
EMNLP 2023, 2023
Cited by: 3
Dynaquant: Compressing deep learning training checkpoints via dynamic quantization
A Agrawal, S Reddy, S Bhattamishra, VPS Nookala, V Vashishth, K Rong, ...
arXiv preprint arXiv:2306.11800, 2023
Cited by: 2
Separations in the Representational Capabilities of Transformers and Recurrent Architectures
S Bhattamishra, M Hahn, P Blunsom, V Kanade
arXiv preprint arXiv:2406.09347, 2024