Luke Metz
Google Brain
Verified email at google.com - Homepage
Title · Cited by · Year
Unsupervised representation learning with deep convolutional generative adversarial networks
A Radford, L Metz, S Chintala
ICLR 2016, 2015
Cited by 12982 · 2015
BEGAN: Boundary equilibrium generative adversarial networks
D Berthelot, T Schumm, L Metz
arXiv preprint arXiv:1703.10717, 2017
Cited by 1261 · 2017
Unrolled generative adversarial networks
L Metz, B Poole, D Pfau, J Sohl-Dickstein
ICLR 2017, 2016
Cited by 897 · 2016
Adversarial spheres
J Gilmer, L Metz, F Faghri, SS Schoenholz, M Raghu, M Wattenberg, ...
arXiv preprint arXiv:1801.02774, 2018
Cited by 312 · 2018
Meta-Learning Update Rules for Unsupervised Representation Learning
L Metz, N Maheswaranathan, B Cheung, J Sohl-Dickstein
ICLR 2019, Oral, 2018
Cited by 123* · 2018
Understanding and correcting pathologies in the training of learned optimizers
L Metz, N Maheswaranathan, J Nixon, D Freeman, J Sohl-Dickstein
International Conference on Machine Learning, 4556-4565, 2019
Cited by 74* · 2019
Guided evolutionary strategies: Augmenting random search with surrogate gradients
N Maheswaranathan, L Metz, G Tucker, D Choi, J Sohl-Dickstein
International Conference on Machine Learning, 2019
Cited by 74* · 2019
Discrete Sequential Prediction of Continuous Actions for Deep RL
L Metz, J Ibarz, N Jaitly, J Davidson
arXiv preprint arXiv:1705.05035, 2017
Cited by 74 · 2017
Towards GAN Benchmarks Which Require Generalization
I Gulrajani, C Raffel, L Metz
ICLR 2019, 2018
Cited by 44 · 2018
On linear identifiability of learned representations
G Roeder, L Metz, D Kingma
International Conference on Machine Learning, 9030-9039, 2021
Cited by 29 · 2021
Learning to predict without looking ahead: World models without forward prediction
CD Freeman, L Metz, D Ha
NeurIPS 2019, 2019
Cited by 29 · 2019
Learning an adaptive learning rate schedule
Z Xu, AM Dai, J Kemp, L Metz
NeurIPS workshop on meta-learning, 2019
Cited by 27 · 2019
Tasks, stability, architecture, and compute: Training more effective learned optimizers, and using them to train themselves
L Metz, N Maheswaranathan, CD Freeman, B Poole, J Sohl-Dickstein
arXiv preprint arXiv:2009.11243, 2020
Cited by 18 · 2020
Gradients are not all you need
L Metz, CD Freeman, SS Schoenholz, T Kachman
arXiv preprint arXiv:2111.05803, 2021
Cited by 15 · 2021
Using a thousand optimization tasks to learn hyperparameter search strategies
L Metz, N Maheswaranathan, R Sun, CD Freeman, B Poole, ...
arXiv preprint arXiv:2002.11887, 2020
Cited by 15 · 2020
Using learned optimizers to make models robust to input noise
L Metz, N Maheswaranathan, J Shlens, J Sohl-Dickstein, ED Cubuk
ICML 2019 Workshop on Uncertainty and Robustness in Deep Learning, 2019
Cited by 14 · 2019
Ridge Rider: Finding Diverse Solutions by Following Eigenvectors of the Hessian
J Parker-Holder, L Metz, C Resnick, H Hu, A Lerer, A Letcher, ...
NeurIPS 2020, 2020
Cited by 13 · 2020
Meta-learning biologically plausible semi-supervised update rules
K Gu, S Greydanus, L Metz, N Maheswaranathan, J Sohl-Dickstein
bioRxiv, 2019
Cited by 11 · 2019
Unbiased Gradient Estimation in Unrolled Computation Graphs with Persistent Evolution Strategies
P Vicol, L Metz, J Sohl-Dickstein
International Conference on Machine Learning, Best Paper, 10553-10563, 2021
Cited by 9 · 2021
Parallel training of deep networks with local updates
M Laskin, L Metz, S Nabarro, M Saroufim, B Noune, C Luschi, ...
arXiv preprint arXiv:2012.03837, 2020
Cited by 8 · 2020
Articles 1–20