Matus Telgarsky
Verified email at illinois.edu
Title · Cited by · Year
Tensor decompositions for learning latent variable models
A Anandkumar, R Ge, D Hsu, SM Kakade, M Telgarsky
The Journal of Machine Learning Research 15 (1), 2773-2832, 2014
Cited by 694
Spectrally-normalized margin bounds for neural networks
PL Bartlett, DJ Foster, MJ Telgarsky
Advances in Neural Information Processing Systems, 6240-6249, 2017
Cited by 223
Benefits of depth in neural networks
M Telgarsky
arXiv preprint arXiv:1602.04485, 2016
Cited by 189
Non-convex learning via stochastic gradient Langevin dynamics: a nonasymptotic analysis
M Raginsky, A Rakhlin, M Telgarsky
arXiv preprint arXiv:1702.03849, 2017
Cited by 102
Representation benefits of deep feedforward networks
M Telgarsky
arXiv preprint arXiv:1509.08101, 2015
Cited by 76
Hartigan’s method: k-means clustering without Voronoi
M Telgarsky, A Vattani
Proceedings of the Thirteenth International Conference on Artificial …, 2010
Cited by 68
Risk and parameter convergence of logistic regression
Z Ji, M Telgarsky
arXiv preprint arXiv:1803.07300, 2018
Cited by 26
Agglomerative Bregman clustering
M Telgarsky, S Dasgupta
arXiv preprint arXiv:1206.6446, 2012
Cited by 26
Neural networks and rational functions
M Telgarsky
Proceedings of the 34th International Conference on Machine Learning-Volume …, 2017
Cited by 25
A primal-dual convergence analysis of boosting
M Telgarsky
Journal of Machine Learning Research 13 (Mar), 561-606, 2012
Cited by 22
Gradient descent aligns the layers of deep linear networks
Z Ji, M Telgarsky
arXiv preprint arXiv:1810.02032, 2018
Cited by 20
Margins, shrinkage, and boosting
M Telgarsky
arXiv preprint arXiv:1303.4172, 2013
Cited by 16
Tensor decompositions for learning latent variable models (A survey for ALT)
A Anandkumar, R Ge, D Hsu, SM Kakade, M Telgarsky
International Conference on Algorithmic Learning Theory, 19-38, 2015
Cited by 12
Moment-based Uniform Deviation Bounds for k-means and Friends
MJ Telgarsky, S Dasgupta
Advances in Neural Information Processing Systems, 2940-2948, 2013
Cited by 12
Dirichlet draws are sparse with high probability
M Telgarsky
arXiv preprint arXiv:1301.4917, 2013
Cited by 8
Convex risk minimization and conditional probability estimation
M Telgarsky, M Dudik, R Schapire
arXiv preprint arXiv:1506.04513, 2015
Cited by 6
Greedy bi-criteria approximations for k-medians and k-means
D Hsu, M Telgarsky
arXiv preprint arXiv:1607.06203, 2016
Cited by 5
Boosting with the logistic loss is consistent
M Telgarsky
arXiv preprint arXiv:1305.2648, 2013
Cited by 5
Scalable non-linear learning with adaptive polynomial expansions
A Agarwal, A Beygelzimer, DJ Hsu, J Langford, MJ Telgarsky
Advances in Neural Information Processing Systems, 2051-2059, 2014
Cited by 3