John Tan Chong Min
Title | Cited by | Year
Dropnet: Reducing neural network complexity via iterative pruning
CMJ Tan, M Motani
International Conference on Machine Learning, 9356-9366, 2020
Cited by 46, 2020
S-Cyc: A Learning Rate Schedule for Iterative Pruning of ReLU-based Networks
S Liu, CMJ Tan, M Motani
arXiv preprint arXiv:2110.08764, 2021
Cited by 2, 2021
Learning, Fast and Slow: A Goal-Directed Memory-Based Approach for Dynamic Environments
JCM Tan, M Motani
2023 IEEE International Conference on Development and Learning (ICDL), 1-6, 2023
Cited by 1, 2023
Large Language Model (LLM) as a System of Multiple Expert Agents: An Approach to solve the Abstraction and Reasoning Corpus (ARC) Challenge
JCM Tan, M Motani
arXiv preprint arXiv:2310.05146, 2023
Cited by 1, 2023
An Approach to Solving the Abstraction and Reasoning Corpus (ARC) Challenge
TJC Min
arXiv preprint arXiv:2306.03553, 2023
Cited by 1, 2023
Brick Tic-Tac-Toe: Exploring the Generalizability of AlphaZero to Novel Test Environments
JTC Min, M Motani
arXiv preprint arXiv:2207.05991, 2022
Cited by 1, 2022
Using hippocampal replay to consolidate experiences in memory-augmented reinforcement learning
JCM Tan, M Motani
Memory in Artificial and Real Intelligence Workshop @ NeurIPS 2022, 2022
Cited by 1, 2022
Optimizing Learning Rate Schedules for Iterative Pruning of Deep Neural Networks
S Liu, R Ghosh, JTC Min, M Motani
arXiv preprint arXiv:2212.06144, 2022
2022
Go-Explore with a guide: Speeding up search in sparse reward settings with goal-directed intrinsic rewards
CMJ Tan, M Motani
2022
CMJ Tan, M Motani, S Komura, K Maeyama, A Taniguchi, T Taniguchi, ...
Articles 1–10