J. Queeney, Y. Paschalidis, C. G. Cassandras, "Generalized proximal policy optimization with sample reuse," Advances in Neural Information Processing Systems 34, pp. 11909-11919, 2021. [Cited by 33]

J. Queeney, I. C. Paschalidis, C. G. Cassandras, "Uncertainty-aware policy optimization: A robust, adaptive trust region approach," Proceedings of the AAAI Conference on Artificial Intelligence 35 (11), pp. 9377-9385, 2021. [Cited by 6]

J. Queeney, M. Benosman, "Risk-averse model uncertainty for distributionally robust safe reinforcement learning," Advances in Neural Information Processing Systems 36, 2024. [Cited by 3]

V. Giammarino, J. Queeney, L. C. Carstensen, M. E. Hasselmo, I. C. Paschalidis, "Opportunities and challenges from using animal videos in reinforcement learning for navigation," IFAC-PapersOnLine 56 (2), pp. 9056-9061, 2023. [Cited by 3]

J. Queeney, E. C. Ozcan, I. C. Paschalidis, C. G. Cassandras, "Optimal transport perturbations for safe reinforcement learning with robustness guarantees," arXiv preprint arXiv:2301.13375, 2023. [Cited by 2]

J. Queeney, I. C. Paschalidis, C. G. Cassandras, "Generalized policy improvement algorithms with theoretically supported sample reuse," arXiv preprint arXiv:2206.13714, 2022. [Cited by 2]

V. Giammarino, J. Queeney, I. C. Paschalidis, "Adversarial imitation learning from visual observations using latent information," arXiv preprint arXiv:2309.17371, 2023. [Cited by 1]

E. C. Ozcan, V. Giammarino, J. Queeney, I. C. Paschalidis, "A model-based approach for improving reinforcement learning efficiency leveraging expert observations," arXiv preprint arXiv:2402.18836, 2024.

J. Queeney, "Reliable deep reinforcement learning: stable training and robust deployment," Boston University, 2023.