Sachin Kumar
Allen Institute for AI
Earth mover's distance pooling over Siamese LSTMs for automatic short answer grading
S Kumar, S Chakrabarti, S Roy
Von Mises-Fisher loss for training sequence-to-sequence models with continuous outputs
S Kumar, Y Tsvetkov
arXiv preprint arXiv:1812.04616, 2018
Controlled text generation as continuous optimization with multiple constraints
S Kumar, E Malmi, A Severyn, Y Tsvetkov
Advances in Neural Information Processing Systems 34, 14542-14554, 2021
Language generation models can cause harm: So what can we do about it? an actionable survey
S Kumar, V Balachandran, L Njoo, A Anastasopoulos, Y Tsvetkov
arXiv preprint arXiv:2210.07700, 2022
Gradient-based constrained sampling from language models
S Kumar, B Paria, Y Tsvetkov
arXiv preprint arXiv:2205.12558, 2022
SSD-LM: Semi-autoregressive simplex-based diffusion language model for text generation and modular control
X Han, S Kumar, Y Tsvetkov
arXiv preprint arXiv:2210.17432, 2022
Topics to avoid: Demoting latent confounds in text classification
S Kumar, S Wintner, NA Smith, Y Tsvetkov
arXiv preprint arXiv:1909.00453, 2019
Minding Language Models' (Lack of) Theory of Mind: A Plug-and-Play Multi-Character Belief Tracker
M Sclar, S Kumar, P West, A Suhr, Y Choi, Y Tsvetkov
arXiv preprint arXiv:2306.00924, 2023
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv preprint arXiv:2402.00159, 2024
Machine translation into low-resource language varieties
S Kumar, A Anastasopoulos, S Wintner, Y Tsvetkov
arXiv preprint arXiv:2106.06797, 2021
On the blind spots of model-based evaluation metrics for text generation
T He, J Zhang, T Wang, S Kumar, K Cho, J Glass, Y Tsvetkov
arXiv preprint arXiv:2212.10020, 2022
Do all languages cost the same? tokenization in the era of commercial language models
O Ahia, S Kumar, H Gonen, J Kasai, DR Mortensen, NA Smith, Y Tsvetkov
arXiv preprint arXiv:2305.13707, 2023
Assessing language model deployment with risk cards
L Derczynski, HR Kirk, V Balachandran, S Kumar, Y Tsvetkov, MR Leiser, ...
arXiv preprint arXiv:2303.18190, 2023
Neural abstractive summarization with structural attention
T Chowdhury, S Kumar, T Chakraborty
arXiv preprint arXiv:2004.09739, 2020
A deep reinforced model for zero-shot cross-lingual summarization with bilingual semantic similarity rewards
ZY Dou, S Kumar, Y Tsvetkov
arXiv preprint arXiv:2006.15454, 2020
Referee: Reference-free sentence summarization with sharper controllability through symbolic knowledge distillation
M Sclar, P West, S Kumar, Y Tsvetkov, Y Choi
arXiv preprint arXiv:2210.13800, 2022
An exploration of data augmentation techniques for improving english to tigrinya translation
L Kidane, S Kumar, Y Tsvetkov
arXiv preprint arXiv:2103.16789, 2021
RewardBench: Evaluating reward models for language modeling
N Lambert, V Pyatkin, J Morrison, LJ Miranda, BY Lin, K Chandu, N Dziri, ...
arXiv preprint arXiv:2403.13787, 2024
SSD-2: Scaling and inference-time fusion of diffusion language models
X Han, S Kumar, Y Tsvetkov, M Ghazvininejad
arXiv preprint arXiv:2305.14771, 2023
End-to-end differentiable GANs for text generation
S Kumar, Y Tsvetkov
PMLR, 2020