Xiaoxia (Shirley) Wu 吴晓霞
Other names: Xiaoxia Wu
DeepSpeed Team @ Microsoft
Verified email at microsoft.com - Homepage
Title
Cited by
Year
AdaGrad stepsizes: Sharp convergence over nonconvex landscapes
R Ward, X Wu, L Bottou
Journal of Machine Learning Research 21 (219), 1-30, 2020
Cited by 299 · 2020
ZeroQuant: Efficient and affordable post-training quantization for large-scale transformers
Z Yao, R Yazdani Aminabadi, M Zhang, X Wu, C Li, Y He
Advances in Neural Information Processing Systems 35, 27168-27183, 2022
Cited by 177 · 2022
When do curricula work?
X Wu, E Dyer, B Neyshabur
arXiv preprint arXiv:2012.03107, 2020
Cited by 118 · 2020
AdaGrad stepsizes: Sharp convergence over nonconvex landscapes, from any initialization
R Ward, X Wu, L Bottou
arXiv preprint arXiv:1806.01811, 2018
Cited by 88 · 2018
WNGrad: Learn the learning rate in gradient descent
X Wu, R Ward, L Bottou
arXiv preprint arXiv:1803.02865, 2018
Cited by 82 · 2018
Global convergence of adaptive gradient methods for an over-parameterized neural network
X Wu, SS Du, R Ward
arXiv preprint arXiv:1902.07111, 2019
Cited by 65 · 2019
Hierarchical learning for generation with long source sequences
T Rohde, X Wu, Y Liu
arXiv preprint arXiv:2104.07545, 2021
Cited by 54 · 2021
Linear convergence of adaptive stochastic gradient descent
Y Xie, X Wu, R Ward
International Conference on Artificial Intelligence and Statistics, 1475-1485, 2020
Cited by 48 · 2020
Choosing the sample with lowest loss makes SGD robust
V Shah, X Wu, S Sanghavi
International Conference on Artificial Intelligence and Statistics, 2120-2130, 2020
Cited by 42 · 2020
ZeroQuant-V2: Exploring post-training quantization in LLMs from comprehensive study to low rank compensation
Z Yao, X Wu, C Li, S Youn, Y He
arXiv preprint arXiv:2303.08302, 2023
Cited by 40* · 2023
Value-at-Risk estimation with stochastic interest rate models for option-bond portfolios
X Wang, D Xie, J Jiang, X Wu, J He
Finance Research Letters 21, 10-20, 2017
Cited by 28 · 2017
DeepSpeed-Chat: Easy, fast and affordable RLHF training of ChatGPT-like models at all scales
Z Yao, RY Aminabadi, O Ruwase, S Rajbhandari, X Wu, AA Awan, ...
arXiv preprint arXiv:2308.01320, 2023
Cited by 25 · 2023
Implicit regularization and convergence for weight normalization
X Wu, E Dobriban, T Ren, S Wu, Z Li, S Gunasekar, R Ward, Q Liu
Advances in Neural Information Processing Systems 33, 2835-2847, 2020
Cited by 24* · 2020
Understanding INT4 quantization for transformer models: Latency speedup, composability, and failure cases
X Wu, C Li, RY Aminabadi, Z Yao, Y He
arXiv preprint arXiv:2301.12017, 2023
Cited by 21* · 2023
ZeroQuant-FP: A leap forward in LLMs post-training W4A8 quantization using floating-point formats
X Wu, Z Yao, Y He
arXiv preprint arXiv:2307.09782, 2023
Cited by 18 · 2023
XTC: Extreme compression for pre-trained transformers made simple and efficient
X Wu, Z Yao, M Zhang, C Li, Y He
Advances in Neural Information Processing Systems 35, 3217-3231, 2022
Cited by 17 · 2022
MLPruning: A multilevel structured pruning framework for transformer-based models
Z Yao, L Ma, S Shen, K Keutzer, MW Mahoney
arXiv preprint arXiv:2105.14636, 2021
Cited by 13 · 2021
Random-LTD: Random and layerwise token dropping brings efficient training for large-scale transformers
Z Yao, X Wu, C Li, C Holmes, M Zhang, C Li, Y He
arXiv preprint arXiv:2211.11586, 2022
Cited by 11 · 2022
ZeRO++: Extremely efficient collective communication for giant model training
G Wang, H Qin, SA Jacobs, C Holmes, S Rajbhandari, O Ruwase, F Yan, ...
arXiv preprint arXiv:2306.10209, 2023
Cited by 8 · 2023
AdaLoss: A computationally-efficient and provably convergent adaptive gradient method
X Wu, Y Xie, SS Du, R Ward
Proceedings of the AAAI Conference on Artificial Intelligence 36 (8), 8691-8699, 2022
Cited by 8 · 2022