Learning Efficient Vision Transformers via Fine-Grained Manifold Distillation. Z. Hao, J. Guo, D. Jia, K. Han, Y. Tang, C. Zhang, H. Hu, Y. Wang. Advances in Neural Information Processing Systems, 2022. Cited by 65*.
CDFKD-MFS: Collaborative Data-Free Knowledge Distillation via Multi-Level Feature Sharing. Z. Hao, Y. Luo, Z. Wang, H. Hu, J. An. IEEE Transactions on Multimedia 24, 4262–4274, 2022. Cited by 14.
Multi-Agent Collaborative Inference via DNN Decoupling: Intermediate Feature Compression and Edge Learning. Z. Hao, G. Xu, Y. Luo, H. Hu, J. An, S. Mao. IEEE Transactions on Mobile Computing, 2022. Cited by 14.
Data-Free Ensemble Knowledge Distillation for Privacy-Conscious Multimedia Model Compression. Z. Hao, Y. Luo, H. Hu, J. An, Y. Wen. Proceedings of the 29th ACM International Conference on Multimedia, 1803–1811, 2021. Cited by 12.
Revisit the Power of Vanilla Knowledge Distillation: From Small Scale to Large Scale. Z. Hao, J. Guo, K. Han, H. Hu, C. Xu, Y. Wang. Thirty-seventh Conference on Neural Information Processing Systems, 2023. Cited by 7*.
Model Compression via Collaborative Data-Free Knowledge Distillation for Edge Intelligence. Z. Hao, Y. Luo, Z. Wang, H. Hu, J. An. 2021 IEEE International Conference on Multimedia and Expo (ICME), 1–6, 2021. Cited by 5.
One-for-All: Bridge the Gap Between Heterogeneous Architectures in Knowledge Distillation. Z. Hao, J. Guo, K. Han, Y. Tang, H. Hu, Y. Wang, C. Xu. Thirty-seventh Conference on Neural Information Processing Systems, 2023. Cited by 4.
DeViT: Decomposing Vision Transformers for Collaborative Inference in Edge Devices. G. Xu, Z. Hao, Y. Luo, H. Hu, J. An, S. Mao. IEEE Transactions on Mobile Computing, 2023. Cited by 2.
Data-Efficient Large Vision Models through Sequential Autoregression. J. Guo, Z. Hao, C. Wang, Y. Tang, H. Wu, H. Hu, K. Han, C. Xu. arXiv preprint arXiv:2402.04841, 2024. Cited by 1.
GhostNetV3: Exploring the Training Strategies for Compact Models. Z. Liu, Z. Hao, K. Han, Y. Tang, Y. Wang. arXiv preprint arXiv:2404.11202, 2024.
SAM-DiffSR: Structure-Modulated Diffusion Model for Image Super-Resolution. C. Wang, Z. Hao, Y. Tang, J. Guo, Y. Yang, K. Han, Y. Wang. arXiv preprint arXiv:2402.17133, 2024.