Amir-massoud Farahmand
Vector Institute, University of Toronto
Verified email at vectorinstitute.ai - Homepage
Title
Cited by
Year
Regularized Policy Iteration
A Farahmand, M Ghavamzadeh, S Mannor, C Szepesvári
Advances in Neural Information Processing Systems 21 (NeurIPS 2008), 441-448, 2009
159 · 2009
Error propagation for approximate policy and value iteration
A Farahmand, C Szepesvári, R Munos
Advances in Neural Information Processing Systems (NeurIPS), 568-576, 2010
155 · 2010
Manifold-adaptive dimension estimation
A Farahmand, C Szepesvári, JY Audibert
Proceedings of the 24th International Conference on Machine Learning (ICML), 2007
91 · 2007
Learning from Limited Demonstrations
B Kim, A Farahmand, J Pineau, D Precup
Advances in Neural Information Processing Systems (NeurIPS), 2859-2867, 2013
90 · 2013
Regularized fitted Q-iteration for planning in continuous-space Markovian decision problems
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
American Control Conference (ACC), 725-730, 2009
87* · 2009
Robust Jacobian estimation for uncalibrated visual servoing
A Shademan, A Farahmand, M Jägersand
IEEE International Conference on Robotics and Automation (ICRA), 5564-5569, 2010
68 · 2010
Regularized policy iteration with nonparametric function spaces
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Journal of Machine Learning Research (JMLR) 17 (1), 4809-4874, 2016
56 · 2016
Global visual-motor estimation for uncalibrated visual servoing
A Farahmand, A Shademan, M Jägersand
IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2007
52* · 2007
Model Selection in Reinforcement Learning
AM Farahmand, C Szepesvári
Machine Learning 85 (3), 299-332, 2011
47 · 2011
Value-aware loss function for model-based reinforcement learning
A Farahmand, A Barreto, D Nikovski
Artificial Intelligence and Statistics (AISTATS), 1486-1494, 2017
34 · 2017
Action-Gap Phenomenon in Reinforcement Learning
AM Farahmand
Advances in Neural Information Processing Systems (NeurIPS), 2011
33 · 2011
Regularization in Reinforcement Learning
AM Farahmand
PhD thesis, Department of Computing Science, University of Alberta, 2011
33 · 2011
Approximate MaxEnt Inverse Optimal Control and its Application for Mental Simulation of Human Interactions
DA Huang, AM Farahmand, KM Kitani, JA Bagnell
AAAI Conference on Artificial Intelligence (AAAI), 2015
30 · 2015
Model-based and model-free reinforcement learning for visual servoing
A Farahmand, A Shademan, M Jägersand, C Szepesvári
IEEE International Conference on Robotics and Automation (ICRA), 2917-2924, 2009
28* · 2009
Deep reinforcement learning for partial differential equation control
A Farahmand, S Nabi, DN Nikovski
American Control Conference (ACC), 3120-3127, 2017
26 · 2017
Attentional network for visual object detection
K Hara, MY Liu, O Tuzel, A Farahmand
arXiv preprint arXiv:1702.01478, 2017
26 · 2017
Regularized Fitted Q-iteration: Application to Bounded Resource Planning
A Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
26* · 2008
Regularized fitted Q-iteration: Application to planning
AM Farahmand, M Ghavamzadeh, C Szepesvári, S Mannor
Recent Advances in Reinforcement Learning, 55-68, 2008
26* · 2008
Interaction of Culture-based Learning and Cooperative Co-evolution and its Application to Automatic Behavior-based System Design
AM Farahmand, MN Ahmadabadi, C Lucas, BN Araabi
IEEE Transactions on Evolutionary Computation 14 (1), 23-57, 2010
22 · 2010
Iterative Value-Aware Model Learning
A Farahmand
Advances in Neural Information Processing Systems (NeurIPS), 9072-9083, 2018
21 · 2018
Articles 1–20