Low-rank matrix completion using alternating minimization. P Jain, P Netrapalli, S Sanghavi. Proceedings of the forty-fifth annual ACM Symposium on Theory of Computing …, 2013. Cited by 893.
Phase retrieval using alternating minimization. P Netrapalli, P Jain, S Sanghavi. IEEE Transactions on Signal Processing 63 (18), 4814-4826, 2015. Cited by 515.
How to escape saddle points efficiently. C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan. arXiv preprint arXiv:1703.00887, 2017. Cited by 461.
Non-convex robust PCA. P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain. Advances in Neural Information Processing Systems 27, 1107-1115, 2014. Cited by 258.
Learning the graph of epidemic cascades. P Netrapalli, S Sanghavi. ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222, 2012. Cited by 177.
Learning sparsely used overcomplete dictionaries via alternating minimization. A Agarwal, A Anandkumar, P Jain, P Netrapalli. SIAM Journal on Optimization 26 (4), 2775-2799, 2016. Cited by 154.
Accelerated gradient descent escapes saddle points faster than gradient descent. C Jin, P Netrapalli, MI Jordan. Conference on Learning Theory, 1042-1085, 2018. Cited by 149.
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja's algorithm. P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford. Conference on Learning Theory, 1147-1164, 2016. Cited by 108*.
Learning sparsely used overcomplete dictionaries. A Agarwal, A Anandkumar, P Jain, P Netrapalli, R Tandon. Conference on Learning Theory, 123-137, 2014. Cited by 106.
Information-theoretic thresholds for community detection in sparse networks. J Banks, C Moore, J Neeman, P Netrapalli. Conference on Learning Theory, 383-416, 2016. Cited by 103*.
Faster eigenvector computation via shift-and-invert preconditioning. D Garber, E Hazan, C Jin, C Musco, P Netrapalli, A Sidford. International Conference on Machine Learning, 2626-2634, 2016. Cited by 98*.
What is local optimality in nonconvex-nonconcave minimax optimization? C Jin, P Netrapalli, MI Jordan. International Conference on Machine Learning, 4880-4889, 2020. Cited by 89*.
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification. P Jain, P Netrapalli, SM Kakade, R Kidambi, A Sidford. Journal of Machine Learning Research 18 (1), 8258-8299, 2017. Cited by 89*.
Accelerating stochastic gradient descent for least squares regression. P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford. Conference on Learning Theory, 545-604, 2018. Cited by 85*.
A clustering approach to learning sparsely used overcomplete dictionaries. A Agarwal, A Anandkumar, P Netrapalli. IEEE Transactions on Information Theory 63 (1), 575-592, 2016. Cited by 84*.
Fast exact matrix completion with finite samples. P Jain, P Netrapalli. Conference on Learning Theory, 1007-1034, 2015. Cited by 80.
Provable efficient online matrix completion via non-convex stochastic gradient descent. C Jin, SM Kakade, P Netrapalli. arXiv preprint arXiv:1605.08370, 2016. Cited by 76.
One-bit compressed sensing: Provable support and vector recovery. S Gopi, P Netrapalli, P Jain, A Nori. International Conference on Machine Learning, 154-162, 2013. Cited by 69.
Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis. R Ge, C Jin, P Netrapalli, A Sidford. International Conference on Machine Learning, 2741-2750, 2016. Cited by 54.
On the insufficiency of existing momentum schemes for stochastic optimization. R Kidambi, P Netrapalli, P Jain, S Kakade. Information Theory and Applications Workshop (ITA), 1-9, 2018. Cited by 53.