Hyperband: A novel bandit-based approach to hyperparameter optimization. L Li, K Jamieson, G DeSalvo, A Rostamizadeh, A Talwalkar. Journal of Machine Learning Research 18 (185), 1-52, 2018. | 3042 | 2018 |
Random search and reproducibility for neural architecture search. L Li, A Talwalkar. Uncertainty in Artificial Intelligence, 367-377, 2020. | 837 | 2020 |
A system for massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, E Gonina, J Ben-Tzur, M Hardt, ... Proceedings of Machine Learning and Systems 2, 230-246, 2020. | 470 | 2020 |
Hyperband: Bandit-Based Configuration Evaluation for Hyperparameter Optimization. L Li, K Jamieson, G DeSalvo, A Rostamizadeh, A Talwalkar. ICLR, 2017. | 187* | 2017 |
Massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, K Gonina, M Hardt, B Recht, ... | 185 | 2018 |
Federated hyperparameter tuning: Challenges, baselines, and connections to weight-sharing. M Khodak, R Tu, T Li, L Li, MFF Balcan, V Smith, A Talwalkar. Advances in Neural Information Processing Systems 34, 19184-19197, 2021. | 83 | 2021 |
Geometry-aware gradient algorithms for neural architecture search. L Li, M Khodak, MF Balcan, A Talwalkar. arXiv preprint arXiv:2004.07802, 2020. | 81 | 2020 |
Cross-modal fine-tuning: Align then refine. J Shen, L Li, LM Dery, C Staten, M Khodak, G Neubig, A Talwalkar. International Conference on Machine Learning, 31030-31056, 2023. | 38 | 2023 |
On data efficiency of meta-learning. M Al-Shedivat, L Li, E Xing, A Talwalkar. International Conference on Artificial Intelligence and Statistics, 1369-1377, 2021. | 32 | 2021 |
Rethinking neural operations for diverse tasks. N Roberts, M Khodak, T Dao, L Li, C Ré, A Talwalkar. Advances in Neural Information Processing Systems 34, 15855-15869, 2021. | 29 | 2021 |
Weight sharing for hyperparameter optimization in federated learning. M Khodak, T Li, L Li, M Balcan, V Smith, A Talwalkar. Int. Workshop on Federated Learning for User Privacy and Data …, 2020. | 16 | 2020 |
A system for massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, E Gonina, M Hardt, B Recht, ... arXiv preprint arXiv:1810.05934, 2018. | 14 | 2018 |
Massively parallel hyperparameter tuning. L Li, K Jamieson, A Rostamizadeh, K Gonina, M Hardt, B Recht, ... URL https://openreview.net/forum, 2018. | 10 | 2018 |
Exploiting reuse in pipeline-aware hyperparameter tuning. L Li, E Sparks, K Jamieson, A Talwalkar. arXiv preprint arXiv:1903.05176, 2019. | 9 | 2019 |
Random Search and Reproducibility for Neural Architecture Search. L Li, A Talwalkar. arXiv preprint arXiv:1902.07638, 2019. | 5 | 2019 |
Learning operations for neural PDE solvers. N Roberts, M Khodak, T Dao, L Li, C Ré, A Talwalkar. Proc. ICLR SimDL Workshop, 2021. | 3 | 2021 |
A simple setting for understanding neural architecture search with weight-sharing. M Khodak, L Li, N Roberts, MF Balcan, A Talwalkar. ICML AutoML Workshop, 2020. | 3 | 2020 |
Towards Efficient Automated Machine Learning. L Li. Google, 2020. | 2 | 2020 |
Weight-sharing beyond neural architecture search: Efficient feature map selection and federated hyperparameter tuning. M Khodak, L Li, N Roberts, MF Balcan, A Talwalkar. Proc. 2nd SysML Conf., 2019. | 2 | 2019 |
On weight-sharing and bilevel optimization in architecture search. M Khodak, L Li, MF Balcan, A Talwalkar. | 1 | |