Peter Richtarik
Professor, KAUST
Verified email at kaust.edu.sa
Title
Cited by
Year
Federated learning: Strategies for improving communication efficiency
J Konečný, HB McMahan, FX Yu, P Richtárik, AT Suresh, D Bacon
arXiv preprint arXiv:1610.05492, 2016
Cited by 3697 · 2016
Federated learning: Strategies for improving communication efficiency
J Konečný, HB McMahan, FX Yu, P Richtárik, AT Suresh, D Bacon
arXiv preprint arXiv:1610.05492, 2016
Cited by 2603 · 2016
Federated optimization: Distributed machine learning for on-device intelligence
J Konečný, HB McMahan, D Ramage, P Richtárik
arXiv preprint arXiv:1610.02527, 2016
Cited by 2004 · 2016
Iteration complexity of randomized block-coordinate descent methods for minimizing a composite function
P Richtárik, M Takáč
Mathematical Programming 144 (2), 1-38, 2014
Cited by 852 · 2014
Generalized power method for sparse principal component analysis
M Journée, Y Nesterov, P Richtárik, R Sepulchre
Journal of Machine Learning Research 11, 517-553, 2010
Cited by 729 · 2010
Parallel coordinate descent methods for big data optimization
P Richtárik, M Takáč
Mathematical Programming 156 (1), 433-484, 2016
Cited by 533 · 2016
Tighter theory for local SGD on identical and heterogeneous data
A Khaled, K Mishchenko, P Richtárik
The 23rd International Conference on Artificial Intelligence and Statistics, 2020
Cited by 408 · 2020
Accelerated, parallel and proximal coordinate descent
O Fercoq, P Richtárik
SIAM Journal on Optimization 25 (4), 1997-2023, 2015
Cited by 408 · 2015
SGD: General Analysis and Improved Rates
RM Gower, N Loizou, X Qian, A Sailanbayev, E Shulgin, P Richtárik
Proceedings of the 36th Int. Conf. on Machine Learning, PMLR 97, 2019
Cited by 399 · 2019
Scaling distributed machine learning with in-network aggregation
A Sapio, M Canini, CY Ho, J Nelson, P Kalnis, C Kim, A Krishnamurthy, ...
18th USENIX Symposium on Networked Systems Design and Implementation (NSDI …, 2021
Cited by 369 · 2021
Federated learning of a mixture of global and local models
F Hanzely, P Richtárik
arXiv preprint arXiv:2002.05516, 2020
Cited by 360 · 2020
Mini-batch semi-stochastic gradient descent in the proximal setting
J Konečný, J Liu, P Richtárik, M Takáč
IEEE Journal of Selected Topics in Signal Processing 10 (2), 242-255, 2016
Cited by 323 · 2016
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 306 · 2021
Randomized iterative methods for linear systems
RM Gower, P Richtárik
SIAM Journal on Matrix Analysis and Applications 36 (4), 1660-1690, 2015
Cited by 304 · 2015
Semi-stochastic gradient descent methods
J Konečný, P Richtárik
Frontiers in Applied Mathematics and Statistics 3:9, 2017
Cited by 266* · 2017
Distributed coordinate descent method for learning with big data
P Richtárik, M Takáč
Journal of Machine Learning Research 17 (75), 1-25, 2016
Cited by 264 · 2016
SGD and Hogwild! Convergence Without the Bounded Gradients Assumption
LM Nguyen, PH Nguyen, M van Dijk, P Richtárik, K Scheinberg, M Takáč
Proceedings of the 35th Int. Conf. on Machine Learning, PMLR 80, 3750-3758, 2018
Cited by 224 · 2018
Distributed optimization with arbitrary local solvers
C Ma, J Konečný, M Jaggi, V Smith, MI Jordan, P Richtárik, M Takáč
Optimization Methods and Software 32 (4), 813-848, 2017
Cited by 216 · 2017
Adding vs. averaging in distributed primal-dual optimization
C Ma, V Smith, M Jaggi, MI Jordan, P Richtárik, M Takáč
Proceedings of the 32nd Int. Conf. on Machine Learning, PMLR 37, 1973-1982, 2015
Cited by 210 · 2015
Distributed learning with compressed gradient differences
K Mishchenko, E Gorbunov, M Takáč, P Richtárik
arXiv preprint arXiv:1901.09269, 2019
Cited by 209 · 2019