Llama 2: Open foundation and fine-tuned chat models H Touvron, L Martin, K Stone, P Albert, A Almahairi, Y Babaei, ... arXiv preprint arXiv:2307.09288, 2023 | 8552 | 2023 |
fairseq: A fast, extensible toolkit for sequence modeling M Ott arXiv preprint arXiv:1904.01038, 2019 | 3184 | 2019 |
Dense passage retrieval for open-domain question answering V Karpukhin, B Oğuz, S Min, P Lewis, L Wu, S Edunov, D Chen, W Yih arXiv preprint arXiv:2004.04906, 2020 | 3039 | 2020 |
Multilingual denoising pre-training for neural machine translation Y Liu arXiv preprint arXiv:2001.08210, 2020 | 1767 | 2020 |
Understanding back-translation at scale S Edunov arXiv preprint arXiv:1808.09381, 2018 | 1315 | 2018 |
Beyond English-centric multilingual machine translation A Fan, S Bhosale, H Schwenk, Z Ma, A El-Kishky, S Goyal, M Baines, ... Journal of Machine Learning Research 22 (107), 1-48, 2021 | 768 | 2021 |
Scaling Neural Machine Translation M Ott, S Edunov, D Grangier, M Auli arXiv preprint arXiv:1806.00187, 2018 | 662 | 2018 |
No Language Left Behind: Scaling Human-Centered Machine Translation NLLB Team, MR Costa-jussà, J Cross, O Çelebi, M Elbayad, K Heafield, ... arXiv preprint arXiv:2207.04672, 2022 | 607* | 2022 |
One trillion edges: Graph processing at Facebook-scale A Ching, S Edunov, M Kabiljo, D Logothetis, S Muthukrishnan Proceedings of the VLDB Endowment 8 (12), 1804-1815, 2015 | 565 | 2015 |
Facebook FAIR's WMT19 news translation task submission N Ng, K Yee, A Baevski, M Ott, M Auli, S Edunov arXiv preprint arXiv:1907.06616, 2019 | 421 | 2019 |
Cloze-driven pretraining of self-attention networks A Baevski, S Edunov, Y Liu, L Zettlemoyer, M Auli arXiv preprint arXiv:1903.07785, 2019 | 270 | 2019 |
CCMatrix: Mining billions of high-quality parallel sentences on the web H Schwenk, G Wenzek, S Edunov, E Grave, A Joulin arXiv preprint arXiv:1911.04944, 2019 | 223 | 2019 |
The Llama 3 herd of models A Dubey, A Jauhri, A Pandey, A Kadian, A Al-Dahle, A Letman, A Mathur, ... arXiv preprint arXiv:2407.21783, 2024 | 220 | 2024 |
Classical structured prediction losses for sequence to sequence learning S Edunov, M Ott, M Auli, D Grangier, MA Ranzato arXiv preprint arXiv:1711.04956, 2017 | 209 | 2017 |
Three and a half degrees of separation S Edunov, C Diuk, IO Filiz, S Bhagat, M Burke Research at Facebook 694, 2016 | 166* | 2016 |
Pre-trained language model representations for language generation S Edunov, A Baevski, M Auli arXiv preprint arXiv:1903.09722, 2019 | 162 | 2019 |
Playing the lottery with rewards and multiple languages: lottery tickets in RL and NLP H Yu, S Edunov, Y Tian, AS Morcos arXiv preprint arXiv:1906.02768, 2019 | 142 | 2019 |
Effective long-context scaling of foundation models W Xiong, J Liu, I Molybog, H Zhang, P Bhargava, R Hou, L Martin, ... arXiv preprint arXiv:2309.16039, 2023 | 103 | 2023 |
On the evaluation of machine translation systems trained with back-translation S Edunov, M Ott, MA Ranzato, M Auli arXiv preprint arXiv:1908.05204, 2019 | 97 | 2019 |
Facebook AI WMT21 news translation task submission C Tran, S Bhosale, J Cross, P Koehn, S Edunov, A Fan arXiv preprint arXiv:2108.03265, 2021 | 92 | 2021 |