Authors
Shixiang Gu, Timothy Lillicrap, Ilya Sutskever, Sergey Levine
Publication date
2016/3/2
Journal
arXiv preprint arXiv:1603.00748
Description
Abstract: Model-free reinforcement learning has been successfully applied to a range of
challenging problems, and has recently been extended to handle large neural network
policies and value functions. However, the sample complexity of model-free algorithms,
particularly when using high-dimensional function approximators, tends to limit their
applicability to physical systems. In this paper, we explore algorithms and representations to
reduce the sample complexity of deep reinforcement learning for continuous control tasks. ...
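The truncated abstract stops before naming the representation the paper explores. arXiv:1603.00748 ("Continuous Deep Q-Learning with Model-based Acceleration") proposes normalized advantage functions (NAF): the Q-function is decomposed as Q(s, a) = V(s) + A(s, a), with the advantage constrained to a quadratic in the action, A(s, a) = -1/2 (a - mu(s))^T P(s) (a - mu(s)), so that the maximizing action is simply mu(s). A minimal NumPy sketch of that decomposition, assuming mu, L, and v are outputs of a state-conditioned network (the function name and toy values below are illustrative, not from the paper's code):

```python
import numpy as np

def naf_q(a, mu, L, v):
    """NAF decomposition: Q(s, a) = V(s) + A(s, a).

    A(s, a) = -1/2 (a - mu)^T P (a - mu), where P = L L^T is
    positive-semidefinite because L is lower-triangular. In the paper,
    mu, L, and v are produced by a neural network given the state s.
    """
    P = L @ L.T
    diff = a - mu
    return v - 0.5 * diff @ P @ diff

# The quadratic advantage is zero at a = mu, so the greedy action is
# mu(s) and Q there equals the state value V(s):
mu = np.array([0.3, -0.1])
L = np.tril(np.array([[1.0, 0.0],
                      [0.5, 2.0]]))
print(naf_q(mu, mu, L, v=1.5))  # → 1.5
```

Because the maximum over actions is available in closed form, Q-learning's max operator becomes trivial in continuous action spaces, which is what lets the method scale to the control tasks the abstract refers to.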
Total citations
20165
Scholar articles
S Gu, T Lillicrap, I Sutskever, S Levine - arXiv preprint arXiv:1603.00748, 2016