Gal Kaplun
PhD Student at Harvard University
Verified email at g.harvard.edu - Homepage
Title
Cited by
Year
Deep double descent: Where bigger models and more data hurt
P Nakkiran, G Kaplun, Y Bansal, T Yang, B Barak, I Sutskever
Journal of Statistical Mechanics: Theory and Experiment 2021 (12), 124003, 2021
Cited by 913 · 2021
SGD on neural networks learns functions of increasing complexity
P Nakkiran, G Kaplun, D Kalimeris, T Yang, BL Edelman, F Zhang, ...
arXiv preprint arXiv:1905.11604, 2019
Cited by 216* · 2019
For self-supervised learning, rationality implies generalization, provably
Y Bansal, G Kaplun, B Barak
arXiv preprint arXiv:2010.08508, 2020
Cited by 28 · 2020
Deconstructing distributions: A pointwise framework of learning
G Kaplun, N Ghosh, S Garg, B Barak, P Nakkiran
arXiv preprint arXiv:2202.09931, 2022
Cited by 19 · 2022
Robust Influence Maximization for Hyperparametric Models
D Kalimeris, G Kaplun, Y Singer
ICML 2019, 2019
Cited by 18 · 2019
Robust neural networks are more interpretable for genomics
PK Koo, S Qian, G Kaplun, V Volf, D Kalimeris
bioRxiv, 657437, 2019
Cited by 12 · 2019
Knowledge distillation: Bad models can be good role models
G Kaplun, E Malach, P Nakkiran, S Shalev-Shwartz
Advances in Neural Information Processing Systems 35, 28683-28694, 2022
Cited by 9 · 2022
For manifold learning, deep neural networks can be locality sensitive hash functions
N Dikkala, G Kaplun, R Panigrahy
arXiv preprint arXiv:2103.06875, 2021
Cited by 7 · 2021
SubTuning: Efficient finetuning for multi-task learning
G Kaplun, A Gurevich, T Swisa, M David, S Shalev-Shwartz, E Malach
arXiv preprint arXiv:2302.06354, 2023
Cited by 4 · 2023
Robustness from simple classifiers
S Qian, D Kalimeris, G Kaplun, Y Singer
arXiv preprint arXiv:2002.09422, 2020
Cited by 3 · 2020
Beyond implicit bias: The insignificance of SGD noise in online learning
N Vyas, D Morwani, R Zhao, G Kaplun, S Kakade, B Barak
arXiv preprint arXiv:2306.08590, 2023
Cited by 2 · 2023
Remote inspection of adversary-controlled environments
J Tobisch, S Philippe, B Barak, G Kaplun, C Zenger, A Glaser, C Paar, ...
Nature communications 14 (1), 6566, 2023
Cited by 1 · 2023
Less is More: Selective Layer Finetuning with SubTuning
G Kaplun, A Gurevich, T Swisa, M David, S Shalev-Shwartz, E Malach
arXiv preprint arXiv:2302.06354, 2023
Cited by 1 · 2023
Implicit Intermediate Supervision for Learning Complex Functions
G Kaplun, N Wies
2023
Corgi²: A Hybrid Offline-Online Approach To Storage-Aware Data Shuffling For SGD
E Livne, G Kaplun, E Malach, S Shalev-Shwartz
arXiv preprint arXiv:2309.01640, 2023
2023
On Scaling Dynamics in Deep Learning
G Kaplun
Harvard University, 2023
2023