Huihong Shi
ViTCoD: Vision Transformer Acceleration via Dedicated Algorithm and Accelerator Co-Design
H You, Z Sun, H Shi, Z Yu, Y Zhao, Y Zhang, C Li, B Li, Y Lin
2023 IEEE International Symposium on High-Performance Computer Architecture …, 2023
Cited by 43 · 2023
ViTALiTy: Unifying Low-Rank and Sparse Approximation for Vision Transformer Acceleration with a Linear Taylor Attention
J Dass, S Wu, H Shi, C Li, Z Ye, Z Wang, Y Lin
2023 IEEE International Symposium on High-Performance Computer Architecture …, 2023
Cited by 33 · 2023
Instant-3D: Instant Neural Radiance Field Training Towards On-Device AR/VR 3D Reconstruction
S Li, C Li, W Zhu, B Yu, Y Zhao, C Wan, H You, H Shi, Y Lin
Proceedings of the 50th Annual International Symposium on Computer …, 2023
Cited by 20 · 2023
ShiftAddNAS: Hardware-inspired search for more accurate and efficient neural networks
H You, B Li, H Shi, Y Fu, Y Lin
International Conference on Machine Learning, 25566-25580, 2022
Cited by 10 · 2022
ShiftAddViT: Mixture of multiplication primitives towards efficient vision transformer
H You, H Shi, Y Guo, Y Lin
Advances in Neural Information Processing Systems 36, 2024
Cited by 8 · 2024
Max-affine spline insights into deep network pruning
H You, R Balestriero, Z Lu, Y Kou, H Shi, S Zhang, S Wu, Y Lin, ...
arXiv preprint arXiv:2101.02338, 2021
Cited by 6 · 2021
Intelligent typography: Artistic text style transfer for complex texture and structure
W Mao, S Yang, H Shi, J Liu, Z Wang
IEEE Transactions on Multimedia 25, 6485-6498, 2022
Cited by 5 · 2022
NASA: Neural Architecture Search and Acceleration for Hardware-Inspired Hybrid Networks
H Shi, H You, Y Zhao, Z Wang, Y Lin
Proceedings of the 41st IEEE/ACM International Conference on Computer-Aided …, 2022
Cited by 4 · 2022
NASA+: Neural architecture search and acceleration for multiplication-reduced hybrid networks
H Shi, H You, Z Wang, Y Lin
IEEE Transactions on Circuits and Systems I: Regular Papers 70 (6), 2523-2536, 2023
Cited by 3 · 2023
Unveiling and Harnessing Hidden Attention Sinks: Enhancing Large Language Models without Training through Attention Calibration
Z Yu, Z Wang, Y Fu, H Shi, K Shaikh, YC Lin
arXiv preprint arXiv:2406.15765, 2024
Cited by 1 · 2024
An FPGA-Based Reconfigurable Accelerator for Convolution-Transformer Hybrid EfficientViT
H Shao, H Shi, W Mao, Z Wang
arXiv preprint arXiv:2403.20230, 2024
Cited by 1 · 2024
A Computationally Efficient Neural Video Compression Accelerator Based on a Sparse CNN-Transformer Hybrid Network
S Zhang, W Mao, H Shi, Z Wang
2024 Design, Automation & Test in Europe Conference & Exhibition (DATE), 1-6, 2024
Cited by 1 · 2024
NASA-F: FPGA-Oriented Search and Acceleration for Multiplication-Reduced Hybrid Networks
H Shi, Y Xu, Y Wang, W Mao, Z Wang
IEEE Transactions on Circuits and Systems I: Regular Papers, 2023
Cited by 1 · 2023
Max-affine spline insights into deep network pruning
R Balestriero, H You, Z Lu, Y Kou, H Shi, Y Lin, R Baraniuk
Cited by 1 · 2018
ShiftAddLLM: Accelerating Pretrained LLMs via Post-Training Multiplication-Less Reparameterization
H You, Y Guo, Y Fu, W Zhou, H Shi, X Zhang, S Kundu, A Yazdanbakhsh, ...
arXiv preprint arXiv:2406.05981, 2024
2024
P²-ViT: Power-of-Two Post-Training Quantization and Acceleration for Fully Quantized Vision Transformer
H Shi, X Cheng, W Mao, Z Wang
arXiv preprint arXiv:2405.19915, 2024
2024
Trio-ViT: Post-Training Quantization and Acceleration for Softmax-Free Efficient Vision Transformer
H Shi, H Shao, W Mao, Z Wang
arXiv preprint arXiv:2405.03882, 2024
2024
SR: Exploring a Double-Win Transformer-Based Framework for Ideal and Blind Super-Resolution
M She, W Mao, H Shi, Z Wang
International Conference on Artificial Neural Networks, 522-537, 2023
2023
LITNet: A Light-weight Image Transform Net for Image Style Transfer
H Shi, W Mao, Z Wang
2021 International Joint Conference on Neural Networks (IJCNN), 1-8, 2021
2021