Guanghui Wang
Sadam: A variant of adam for strongly convex functions
G Wang, S Lu, W Tu, L Zhang
International Conference on Learning Representations, 2019
Cited by 42
Bandit convex optimization in non-stationary environments
P Zhao, G Wang, L Zhang, ZH Zhou
Journal of Machine Learning Research 22 (125), 1-45, 2021
Cited by 39
Optimal algorithms for Lipschitz bandits with heavy-tailed rewards
S Lu, G Wang, Y Hu, L Zhang
International Conference on Machine Learning, 4154-4163, 2019
Cited by 39
Momentum accelerates the convergence of stochastic AUPRC maximization
G Wang, M Yang, L Zhang, T Yang
International Conference on Artificial Intelligence and Statistics, 3753-3771, 2022
Cited by 22
Dual adaptivity: A universal algorithm for minimizing the adaptive regret of convex functions
L Zhang, G Wang, WW Tu, W Jiang, ZH Zhou
Advances in Neural Information Processing Systems 34, 24968-24980, 2021
Cited by 19
Multi-objective generalized linear bandits
S Lu, G Wang, Y Hu, L Zhang
arXiv preprint arXiv:1905.12879, 2019
Cited by 16
Minimizing Adaptive Regret with One Gradient per Iteration
G Wang, D Zhao, L Zhang
IJCAI, 2762-2768, 2018
Cited by 16
Adaptivity and optimality: A universal algorithm for online convex optimization
G Wang, S Lu, L Zhang
Uncertainty in Artificial Intelligence, 659-668, 2020
Cited by 14
Nearly optimal regret for stochastic linear bandits with heavy-tailed payoffs
B Xue, G Wang, Y Wang, L Zhang
arXiv preprint arXiv:2004.13465, 2020
Cited by 12
Stochastic graphical bandits with adversarial corruptions
S Lu, G Wang, L Zhang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (10), 8749-8757, 2021
Cited by 10
A simple yet universal strategy for online convex optimization
L Zhang, G Wang, J Yi, T Yang
International Conference on Machine Learning, 26605-26623, 2022
Cited by 9
Online convex optimization with continuous switching constraint
G Wang, Y Wan, T Yang, L Zhang
Advances in Neural Information Processing Systems 34, 28636-28647, 2021
Cited by 9
Projection-free distributed online learning with strongly convex losses
Y Wan, G Wang, L Zhang
arXiv preprint arXiv:2103.11102, 2021
Cited by 8*
Adapting to smoothness: A more universal algorithm for online convex optimization
G Wang, S Lu, Y Hu, L Zhang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 6162-6169, 2020
Cited by 6
Minimizing dynamic regret on geodesic metric spaces
Z Hu, G Wang, JD Abernethy
The Thirty Sixth Annual Conference on Learning Theory, 4336-4383, 2023
Cited by 3
On accelerated perceptrons and beyond
G Wang, R Hanashiro, E Guha, J Abernethy
arXiv preprint arXiv:2210.09371, 2022
Cited by 3
Adaptive oracle-efficient online learning
G Wang, Z Hu, V Muthukumar, JD Abernethy
Advances in Neural Information Processing Systems 35, 23398-23411, 2022
Cited by 2
Faster Margin Maximization Rates for Generic Optimization Methods
G Wang, Z Hu, V Muthukumar, JD Abernethy
Advances in Neural Information Processing Systems 36, 2024
Cited by 1
Extragradient Type Methods for Riemannian Variational Inequality Problems
Z Hu, G Wang, X Wang, A Wibisono, J Abernethy, M Tao
arXiv preprint arXiv:2309.14155, 2023
Cited by 1
Riemannian Projection-free Online Learning
Z Hu, G Wang, JD Abernethy
Advances in Neural Information Processing Systems 36, 2024
Articles 1–20