Jeffrey Wu
OpenAI
Verified email at openai.com
Title · Cited by · Year
Language models are few-shot learners
T Brown, B Mann, N Ryder, M Subbiah, JD Kaplan, P Dhariwal, ...
Advances in neural information processing systems 33, 1877-1901, 2020
23546 · 2020
Language models are unsupervised multitask learners
A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever
OpenAI blog 1 (8), 9, 2019
18370* · 2019
Training language models to follow instructions with human feedback
L Ouyang, J Wu, X Jiang, D Almeida, C Wainwright, P Mishkin, C Zhang, ...
Advances in neural information processing systems 35, 27730-27744, 2022
5953 · 2022
Generative pretraining from pixels
M Chen, A Radford, R Child, J Wu, H Jun, D Luan, I Sutskever
International conference on machine learning, 1691-1703, 2020
1417 · 2020
Scaling laws for neural language models
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
1052 · 2020
Gpt-4 technical report
J Achiam, S Adler, S Agarwal, L Ahmad, I Akkaya, FL Aleman, D Almeida, ...
arXiv preprint arXiv:2303.08774, 2023
1037* · 2023
Learning to summarize with human feedback
N Stiennon, L Ouyang, J Wu, D Ziegler, R Lowe, C Voss, A Radford, ...
Advances in Neural Information Processing Systems 33, 3008-3021, 2020
972 · 2020
Fine-tuning language models from human preferences
DM Ziegler, N Stiennon, J Wu, TB Brown, A Radford, D Amodei, ...
arXiv preprint arXiv:1909.08593, 2019
819 · 2019
Webgpt: Browser-assisted question-answering with human feedback
R Nakano, J Hilton, S Balaji, J Wu, L Ouyang, C Kim, C Hesse, S Jain, ...
arXiv preprint arXiv:2112.09332, 2021
685 · 2021
Release strategies and the social impacts of language models
I Solaiman, M Brundage, J Clark, A Askell, A Herbert-Voss, J Wu, ...
arXiv preprint arXiv:1908.09203, 2019
425 · 2019
Recursively summarizing books with human feedback
J Wu, L Ouyang, DM Ziegler, N Stiennon, R Lowe, J Leike, P Christiano
arXiv preprint arXiv:2109.10862, 2021
188 · 2021
Self-critiquing models for assisting human evaluators
W Saunders, C Yeh, J Wu, S Bills, L Ouyang, J Ward, J Leike
arXiv preprint arXiv:2206.05802, 2022
114 · 2022
Language models can explain neurons in language models
S Bills, N Cammarata, D Mossing, H Tillman, L Gao, G Goh, I Sutskever, ...
URL https://openaipublic.blob.core.windows.net/neuron-explainer/paper …, 2023
94 · 2023
Language models are unsupervised multitask learners
A Radford, J Wu, R Child, D Luan, D Amodei, I Sutskever
2018
54 · 2020
Language models are unsupervised multitask learners
J Wu, R Child, D Luan, D Amodei, I Sutskever
OpenAI blog 1 (8), 9, 2019
54 · 2019
Scaling laws for neural language models. arXiv 2020
J Kaplan, S McCandlish, T Henighan, TB Brown, B Chess, R Child, ...
arXiv preprint arXiv:2001.08361, 2020
52 · 2020
Weak-to-strong generalization: Eliciting strong capabilities with weak supervision
C Burns, P Izmailov, JH Kirchner, B Baker, L Gao, L Aschenbrenner, ...
arXiv preprint arXiv:2312.09390, 2023
42 · 2023
Learning to summarize from human feedback, 2020
N Stiennon, L Ouyang, J Wu, DM Ziegler, R Lowe, C Voss, A Radford, ...
URL https://arxiv.org/abs, 2009
8 · 2009
Articles 1–18