Lucas Lehnert
State abstractions for lifelong reinforcement learning
D Abel, D Arumugam, L Lehnert, M Littman
International Conference on Machine Learning, 10-19, 2018
Advantages and limitations of using successor features for transfer in reinforcement learning
L Lehnert, S Tellex, ML Littman
arXiv preprint arXiv:1708.00102, 2017
On Value Function Representation of Long Horizon Problems
L Lehnert, R Laroche, H van Seijen
Thirty-Second AAAI Conference on Artificial Intelligence (AAAI-18), 2018
Reward-predictive representations generalize across tasks in reinforcement learning
L Lehnert, ML Littman, MJ Frank
PLoS computational biology 16 (10), e1008317, 2020
Transfer with model features in reinforcement learning
L Lehnert, ML Littman
arXiv preprint arXiv:1807.01736, 2018
Toward good abstractions for lifelong learning
D Abel, D Arumugam, L Lehnert, ML Littman
Proceedings of the NIPS workshop on hierarchical reinforcement learning, 92, 2017
Successor Features Combine Elements of Model-Free and Model-based Reinforcement Learning
L Lehnert, ML Littman
Journal of Machine Learning Research 21, 196:1-196:53, 2020
Successor features support model-based and model-free reinforcement learning
L Lehnert, ML Littman
CoRR abs/1901.11437, 2019
Policy gradient methods for off-policy control
L Lehnert, D Precup
arXiv preprint arXiv:1512.04105, 2015
Mitigating planner overfitting in model-based reinforcement learning
D Arumugam, D Abel, K Asadi, N Gopalan, C Grimm, JK Lee, L Lehnert, ...
arXiv preprint arXiv:1812.01129, 2018
Off-Policy Control Under Changing Behaviour
L Lehnert
McGill University (Canada), 2017
Connection Forms for Beating the Heart
A Mensch, E Piuze, L Lehnert, AJ Bakermans, J Sporring, GJ Strijkers, ...
International Workshop on Statistical Atlases and Computational Models of …, 2014
Using Policy Gradients to Account for Changes in Behaviour Policies under Off-policy Control
L Lehnert, D Precup
Building a Curious Robot for Mapping
L Lehnert, D Precup