Praneeth Netrapalli
Microsoft Research
Verified email at microsoft.com
Title · Cited by · Year
Low-rank matrix completion using alternating minimization
P Jain, P Netrapalli, S Sanghavi
Proceedings of the forty-fifth annual ACM symposium on Theory of computing …, 2013
Cited by 600 · 2013
Phase retrieval using alternating minimization
P Netrapalli, P Jain, S Sanghavi
Advances in Neural Information Processing Systems, 2796-2804, 2013
Cited by 353 · 2013
How to escape saddle points efficiently
C Jin, R Ge, P Netrapalli, SM Kakade, MI Jordan
Proceedings of the 34th International Conference on Machine Learning-Volume …, 2017
Cited by 183 · 2017
Non-convex robust PCA
P Netrapalli, UN Niranjan, S Sanghavi, A Anandkumar, P Jain
Advances in Neural Information Processing Systems, 1107-1115, 2014
Cited by 167 · 2014
Learning the graph of epidemic cascades
P Netrapalli, S Sanghavi
ACM SIGMETRICS Performance Evaluation Review 40 (1), 211-222, 2012
Cited by 126 · 2012
Learning sparsely used overcomplete dictionaries via alternating minimization
A Agarwal, A Anandkumar, P Jain, P Netrapalli
SIAM Journal on Optimization 26 (4), 2775-2799, 2016
Cited by 116 · 2016
Faster eigenvector computation via shift-and-invert preconditioning
D Garber, E Hazan, C Jin, SM Kakade, C Musco, P Netrapalli, A Sidford
ICML, 2626-2634, 2016
Cited by 74* · 2016
Learning sparsely used overcomplete dictionaries
A Agarwal, A Anandkumar, P Jain, P Netrapalli, R Tandon
Cited by 72 · 2014
Information-theoretic thresholds for community detection in sparse networks
J Banks, C Moore, J Neeman, P Netrapalli
Conference on Learning Theory, 383-416, 2016
Cited by 66* · 2016
Streaming PCA: Matching matrix Bernstein and near-optimal finite sample guarantees for Oja’s algorithm
P Jain, C Jin, SM Kakade, P Netrapalli, A Sidford
Conference on Learning Theory, 1147-1164, 2016
Cited by 63* · 2016
A clustering approach to learning sparsely used overcomplete dictionaries
A Agarwal, A Anandkumar, P Netrapalli
IEEE Transactions on Information Theory 63 (1), 575-592, 2017
Cited by 62* · 2017
Fast exact matrix completion with finite samples
P Jain, P Netrapalli
Conference on Learning Theory, 1007-1034, 2015
Cited by 57 · 2015
Accelerated gradient descent escapes saddle points faster than gradient descent
C Jin, P Netrapalli, MI Jordan
arXiv preprint arXiv:1711.10456, 2017
Cited by 54 · 2017
Greedy learning of Markov network structure
P Netrapalli, S Banerjee, S Sanghavi, S Shakkottai
2010 48th Annual Allerton Conference on Communication, Control, and …, 2010
Cited by 51 · 2010
One-bit compressed sensing: Provable support and vector recovery
S Gopi, P Netrapalli, P Jain, A Nori
International Conference on Machine Learning, 154-162, 2013
Cited by 48 · 2013
Provable efficient online matrix completion via non-convex stochastic gradient descent
C Jin, SM Kakade, P Netrapalli
Advances in Neural Information Processing Systems, 4520-4528, 2016
Cited by 42 · 2016
Parallelizing stochastic gradient descent for least squares regression: mini-batching, averaging, and model misspecification
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Cited by 30* · 2018
Convergence rates of active learning for maximum likelihood estimation
K Chaudhuri, SM Kakade, P Netrapalli, S Sanghavi
Proceedings of the 28th International Conference on Neural Information …, 2015
Cited by 30* · 2015
Accelerating stochastic gradient descent for least squares regression
P Jain, SM Kakade, R Kidambi, P Netrapalli, A Sidford
Conference On Learning Theory, 545-604, 2018
Cited by 28* · 2018
Efficient algorithms for large-scale generalized eigenvector computation and canonical correlation analysis
R Ge, C Jin, P Netrapalli, A Sidford
International Conference on Machine Learning, 2741-2750, 2016
Cited by 28 · 2016