Ziming Liu
Hanayo: Harnessing Wave-like Pipeline Parallelism for Enhanced Large Model Training Efficiency
Z Liu, S Cheng, H Zhou, Y You
SC '23: Proceedings of the International Conference for High Performance …, 2023
Cited by 11
EnergonAI: An inference system for 10-100 billion parameter transformer models
J Du, Z Liu, J Fang, S Li, Y Li, Y Lu, Y You
arXiv preprint arXiv:2209.02341, 2022
Cited by 3
HeteGen: Efficient Heterogeneous Parallel Inference for Large Language Models on Resource-Constrained Devices
X Zhao, B Jia, H Zhou, Z Liu, S Cheng, Y You
MLSys 2024, Proceedings of Machine Learning and Systems 6, 162-172, 2024
Cited by 1
AutoChunk: Automated Activation Chunk for Memory-Efficient Long Sequence Inference
X Zhao, S Cheng, G Lu, J Fang, H Zhou, B Jia, Z Liu, Y You
Proceedings of the 12th International Conference on Learning Representations, 2024
Cited by 1
ATP: Adaptive Tensor Parallelism for Foundation Models
S Cheng, Z Liu, J Du, Y You
arXiv preprint arXiv:2301.08658, 2023
Cited by 1
WallFacer: Guiding Transformer Model Training Out of the Long-Context Dark Forest with N-body Problem
Z Liu, S Wang, S Cheng, Z Zhao, Y Bai, X Zhao, J Demmel, Y You
arXiv preprint arXiv:2407.00611, 2024
DSP: Dynamic Sequence Parallelism for Multi-Dimensional Transformers
X Zhao, S Cheng, Z Zheng, Z Yang, Z Liu, Y You
arXiv preprint arXiv:2403.10266, 2024