Honglin Yuan
Title · Cited by · Year
A field guide to federated optimization
J Wang, Z Charles, Z Xu, G Joshi, HB McMahan, M Al-Shedivat, G Andrew, ...
arXiv preprint arXiv:2107.06917, 2021
Cited by 313 · 2021
Federated Accelerated Stochastic Gradient Descent
H Yuan, T Ma
NeurIPS 2020, 2020
Cited by 146 · 2020
Plex: Towards reliability using pretrained large model extensions
D Tran, J Liu, MW Dusenberry, D Phan, M Collier, J Ren, K Han, Z Wang, ...
BayLearn 2022, 2022
Cited by 86 · 2022
What Do We Mean by Generalization in Federated Learning?
H Yuan, W Morningstar, L Ning, K Singhal
ICLR 2022, 2022
Cited by 54 · 2022
Federated composite optimization
H Yuan, M Zaheer, S Reddi
ICML 2021, 2021
Cited by 50 · 2021
Sharp Bounds for Federated Averaging (Local SGD) and Continuous Perspective
M Glasgow*, H Yuan*, T Ma
AISTATS 2022, 2022
Cited by 34 · 2022
Global Optimization with Orthogonality Constraints via Stochastic Diffusion on Manifold
H Yuan, X Gu, R Lai, Z Wen
Journal of Scientific Computing 80 (2), 1139–1170, 2019
Cited by 11 · 2019
Big-Step-Little-Step: Efficient Gradient Methods for Objectives with Multiple Scales
J Kelner, A Marsden, V Sharan, A Sidford, G Valiant, H Yuan
COLT 2022, 2022
Cited by 3 · 2022
On principled local optimization methods for federated learning
H Yuan
Stanford University, 2022
2022
Articles 1–9