Amir-massoud Farahmand
Vector Institute, University of Toronto
Verified email at vectorinstitute.ai - Homepage
Title · Cited by · Year
Error propagation for approximate policy and value iteration
A Farahmand, C Szepesvári, R Munos
Advances in Neural Information Processing Systems (NeurIPS), 568-576, 2010
Cited by 191 · 2010
Regularized Policy Iteration
A Farahmand, M Ghavamzadeh, S Mannor, C Szepesvári
Advances in Neural Information Processing Systems 21 (NeurIPS 2008), 441-448, 2009
Cited by 159 · 2009
Learning from Limited Demonstrations
B Kim, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 2859-2867, 2013
Cited by 106 · 2013
Manifold-adaptive dimension estimation
A Farahmand, C Szepesvári, JY Audibert
Proceedings of the 24th International Conference on Machine Learning (ICML …, 2007
Cited by 100 · 2007
Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
American Control Conference (ACC), 725-730, 2009
Cited by 89* · 2009
Regularized policy iteration with nonparametric function spaces
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Journal of Machine Learning Research (JMLR) 17 (1), 4809-4874, 2016
Cited by 77 · 2016
Robust Jacobian estimation for uncalibrated visual servoing
A Shademan, A Farahmand, M Jägersand
IEEE International Conference on Robotics and Automation (ICRA), 5564-5569, 2010
Cited by 70 · 2010
Value-aware loss function for model-based reinforcement learning
A Farahmand, A Barreto, D Nikovski
Artificial Intelligence and Statistics (AISTATS), 1486-1494, 2017
Cited by 67 · 2017
Model Selection in Reinforcement Learning
AM Farahmand, C Szepesvári
Machine learning 85 (3), 299-332, 2011
Cited by 59 · 2011
Global visual-motor estimation for uncalibrated visual servoing
A Farahmand, A Shademan, M Jagersand
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS …, 2007
Cited by 51* · 2007
Action-Gap Phenomenon in Reinforcement Learning
AM Farahmand
Neural Information Processing Systems (NeurIPS), 2011
Cited by 49 · 2011
Regularization in Reinforcement Learning
AM Farahmand
Department of Computing Science, University of Alberta, 2011
Cited by 37 · 2011
Model-based and model-free reinforcement learning for visual servoing
A Farahmand, A Shademan, M Jagersand, C Szepesvári
IEEE International Conference on Robotics and Automation (ICRA), 2917-2924, 2009
Cited by 35* · 2009
Iterative Value-Aware Model Learning
A Farahmand
Advances in Neural Information Processing Systems (NeurIPS), 9072-9083, 2018
Cited by 33 · 2018
Deep reinforcement learning for partial differential equation control
A Farahmand, S Nabi, DN Nikovski
American Control Conference (ACC), 3120-3127, 2017
Cited by 33 · 2017
Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions
DA Huang, AM Farahmand, KM Kitani, JA Bagnell
AAAI Conference on Artificial Intelligence (AAAI), 2015
Cited by 32 · 2015
Attentional network for visual object detection
K Hara, MY Liu, O Tuzel, A Farahmand
arXiv preprint arXiv:1702.01478, 2017
Cited by 30 · 2017
Interaction of Culture-based Learning and Cooperative Co-evolution and its Application to Automatic Behavior-based System Design
AM Farahmand, MN Ahmadabadi, C Lucas, BN Araabi
IEEE Transactions on Evolutionary Computation 14 (1), 23-57, 2010
Cited by 25 · 2010
Regularized fitted Q-iteration: Application to planning
AM Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Recent Advances in Reinforcement Learning, 55-68, 2008
Cited by 23* · 2008
Bellman Error Based Feature Generation using Random Projections on Sparse Spaces
MM Fard, Y Grinberg, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 3030-3038, 2013
Cited by 20* · 2013