Changmin Yu
Verified email at ucl.ac.uk
Title
Cited by
Year
What about inputting policy in value function: Policy representation and policy-extended value function approximator
H Tang, Z Meng, J Hao, C Chen, D Graves, D Li, C Yu, H Mao, W Liu, ...
Proceedings of the AAAI Conference on Artificial Intelligence 36 (8), 8441-8449, 2022
20* · 2022
Deep kernel learning approach to engine emissions modeling
C Yu, M Seslija, G Brownbridge, S Mosbach, M Kraft, M Parsi, M Davis, ...
Data-Centric Engineering 1, 2020
18 · 2020
Learning State Representations via Retracing in Reinforcement Learning
C Yu, D Li, J Hao, J Wang, N Burgess
arXiv preprint arXiv:2111.12600, 2021
9 · 2021
Prediction and Generalisation over Directed Actions by Grid Cells
C Yu, TEJ Behrens, N Burgess
arXiv preprint arXiv:2006.03355, 2020
6* · 2020
DESTA: A Framework for Safe Reinforcement Learning with Markov Games of Intervention
D Mguni, J Jennings, T Jafferjee, A Sootla, Y Yang, C Yu, U Islam, Z Wang, ...
arXiv preprint arXiv:2110.14468, 2021
5 · 2021
Structured Recognition for Generative Models with Explaining Away
C Yu, H Soulat, N Burgess, M Sahani
Advances in Neural Information Processing Systems, 2022
4* · 2022
Unsupervised representation learning with recognition-parametrised probabilistic models
WI Walker, H Soulat, C Yu, M Sahani
International Conference on Artificial Intelligence and Statistics, 4209-4230, 2023
2 · 2023
Successor-Predecessor Intrinsic Exploration
C Yu, N Burgess, M Sahani, SJ Gershman
Advances in Neural Information Processing Systems 36, 2024
1 · 2024
SEREN: Knowing When to Explore and When to Exploit
C Yu, D Mguni, D Li, A Sootla, J Wang, N Burgess
arXiv preprint arXiv:2205.15064, 2022
1 · 2022
Leveraging Episodic Memory to Improve World Models for Reinforcement Learning
J Coda-Forno, C Yu, Q Guo, Z Fountas, N Burgess
Articles 1–10