SAdam: A variant of Adam for strongly convex functions G Wang, S Lu, W Tu, L Zhang International Conference on Learning Representations, 2019 | 42 | 2019 |
Bandit convex optimization in non-stationary environments P Zhao, G Wang, L Zhang, ZH Zhou Journal of Machine Learning Research 22 (125), 1-45, 2021 | 40 | 2021 |
Optimal algorithms for Lipschitz bandits with heavy-tailed rewards S Lu, G Wang, Y Hu, L Zhang International Conference on Machine Learning, 4154-4163, 2019 | 40 | 2019 |
Momentum accelerates the convergence of stochastic AUPRC maximization G Wang, M Yang, L Zhang, T Yang International Conference on Artificial Intelligence and Statistics, 3753-3771, 2022 | 22 | 2022 |
Dual adaptivity: A universal algorithm for minimizing the adaptive regret of convex functions L Zhang, G Wang, WW Tu, W Jiang, ZH Zhou Advances in Neural Information Processing Systems 34, 24968-24980, 2021 | 19 | 2021 |
Multi-objective generalized linear bandits S Lu, G Wang, Y Hu, L Zhang arXiv preprint arXiv:1905.12879, 2019 | 16 | 2019 |
Minimizing adaptive regret with one gradient per iteration G Wang, D Zhao, L Zhang IJCAI, 2762-2768, 2018 | 16 | 2018 |
Adaptivity and optimality: A universal algorithm for online convex optimization G Wang, S Lu, L Zhang Uncertainty in Artificial Intelligence, 659-668, 2020 | 15 | 2020 |
Nearly optimal regret for stochastic linear bandits with heavy-tailed payoffs B Xue, G Wang, Y Wang, L Zhang arXiv preprint arXiv:2004.13465, 2020 | 14 | 2020 |
A simple yet universal strategy for online convex optimization L Zhang, G Wang, J Yi, T Yang International Conference on Machine Learning, 26605-26623, 2022 | 10 | 2022 |
Stochastic graphical bandits with adversarial corruptions S Lu, G Wang, L Zhang Proceedings of the AAAI Conference on Artificial Intelligence 35 (10), 8749-8757, 2021 | 10 | 2021 |
Online convex optimization with continuous switching constraint G Wang, Y Wan, T Yang, L Zhang Advances in Neural Information Processing Systems 34, 28636-28647, 2021 | 9 | 2021 |
Projection-free distributed online learning with strongly convex losses Y Wan, G Wang, L Zhang arXiv preprint arXiv:2103.11102, 2021 | 9* | 2021 |
Adapting to smoothness: A more universal algorithm for online convex optimization G Wang, S Lu, Y Hu, L Zhang Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 6162-6169, 2020 | 6 | 2020 |
Minimizing dynamic regret on geodesic metric spaces Z Hu, G Wang, JD Abernethy The Thirty-Sixth Annual Conference on Learning Theory, 4336-4383, 2023 | 4 | 2023 |
On accelerated perceptrons and beyond G Wang, R Hanashiro, E Guha, J Abernethy arXiv preprint arXiv:2210.09371, 2022 | 3 | 2022 |
Adaptive oracle-efficient online learning G Wang, Z Hu, V Muthukumar, JD Abernethy Advances in Neural Information Processing Systems 35, 23398-23411, 2022 | 2 | 2022 |
Extragradient type methods for Riemannian variational inequality problems Z Hu, G Wang, X Wang, A Wibisono, JD Abernethy, M Tao International Conference on Artificial Intelligence and Statistics, 2080-2088, 2024 | 1 | 2024 |
Faster Margin Maximization Rates for Generic Optimization Methods G Wang, Z Hu, V Muthukumar, JD Abernethy Advances in Neural Information Processing Systems 36, 2024 | 1 | 2024 |
Riemannian Projection-free Online Learning Z Hu, G Wang, JD Abernethy Advances in Neural Information Processing Systems 36, 2024 | | 2024 |