| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| FlowFormer: A Transformer Architecture for Optical Flow | Z Huang*, X Shi*, C Zhang, Q Wang, KC Cheung, H Qin, J Dai, H Li | ECCV 2022 | 195 | 2022 |
| FuseFormer: Fusing Fine-Grained Information in Transformers for Video Inpainting | R Liu, H Deng, Y Huang, X Shi, L Lu, W Sun, X Wang, J Dai, H Li | ICCV 2021 | 118 | 2021 |
| Decoupled Spatial-Temporal Transformer for Video Inpainting | R Liu, H Deng, Y Huang, X Shi, L Lu, W Sun, X Wang, J Dai, H Li | arXiv:2104.06637 | 49 | 2021 |
| FlowFormer++: Masked Cost Volume Autoencoding for Pretraining Optical Flow Estimation | X Shi, Z Huang, D Li, M Zhang, KC Cheung, S See, H Qin, J Dai, H Li | CVPR 2023 | 44 | 2023 |
| KBNet: Kernel Basis Network for Image Restoration | Y Zhang, D Li, X Shi, D He, K Song, X Wang, H Qin, H Li | arXiv:2303.02881 | 28 | 2023 |
| VideoFlow: Exploiting Temporal Cues for Multi-Frame Optical Flow Estimation | X Shi, Z Huang, W Bian, D Li, M Zhang, KC Cheung, S See, H Qin, J Dai, … | ICCV 2023 | 27 | 2023 |
| A Simple Baseline for Video Restoration with Grouped Spatial-Temporal Shift | D Li, X Shi, Y Zhang, KC Cheung, S See, X Wang, H Qin, H Li | CVPR 2023 | 18 | 2023 |
| BlinkFlow: A Dataset to Push the Limits of Event-Based Optical Flow Estimation | Y Li, Z Huang, S Chen, X Shi, H Li, H Bao, Z Cui, G Zhang | IROS 2023 | 10 | 2023 |
| Motion-I2V: Consistent and Controllable Image-to-Video Generation with Explicit Motion Modeling | X Shi, Z Huang, FY Wang, W Bian, D Li, Y Zhang, M Zhang, KC Cheung, … | arXiv:2401.15977 | 6 | 2024 |
| A Unified Conditional Framework for Diffusion-Based Image Restoration | Y Zhang, X Shi, D Li, X Wang, J Wang, H Li | Advances in Neural Information Processing Systems 36 | 4 | 2024 |
| Context-TAP: Tracking Any Point Demands Spatial Context Features | W Bian, Z Huang, X Shi, Y Dong, Y Li, H Li | arXiv:2306.02000 | 4 | 2023 |
| No Attention Is Needed: Grouped Spatial-Temporal Shift for Simple and Efficient Video Restorers | D Li, X Shi, Y Zhang, X Wang, H Qin, H Li | arXiv:2206.10810 | 4 | 2022 |
| AnimateLCM: Accelerating the Animation of Personalized Diffusion Models and Adapters with Decoupled Consistency Learning | FY Wang, Z Huang, X Shi, W Bian, G Song, Y Liu, H Li | arXiv:2402.00769 | 2 | 2024 |
| FlowFormer: A Transformer Architecture and Its Masked Cost Volume Autoencoding for Optical Flow | Z Huang, X Shi, C Zhang, Q Wang, Y Li, H Qin, J Dai, X Wang, H Li | arXiv:2306.05442 | 2 | 2023 |
| Context-PIPs: Persistent Independent Particles Demands Context Features | W Bian, Z Huang, X Shi, Y Dong, Y Li, H Li | Advances in Neural Information Processing Systems 36 | 1 | 2024 |
| Be-Your-Outpainter: Mastering Video Outpainting through Input-Specific Adaptation | FY Wang, X Wu, Z Huang, X Shi, D Shen, G Song, Y Liu, H Li | arXiv:2403.13745 | | 2024 |
| Context-PIPs: Persistent Independent Particles Demands Spatial Context Features | W Bian, Z Huang, X Shi, Y Dong, Y Li, H Li | arXiv:2306.02000 | | 2023 |
| VideoFlow: Supplementary Material | X Shi, Z Huang, W Bian, D Li, M Zhang, KC Cheung, S See, H Qin, J Dai, … | | | |
| A Simple Baseline for Video Restoration with Grouped Spatial-Temporal Shift: Supplementary Material | D Li, X Shi, Y Zhang, KC Cheung, S See, X Wang, H Qin, H Li | | | |
| FlowFormer: A Transformer Architecture for Optical Flow: Supplementary Materials | Z Huang*, X Shi*, C Zhang, Q Wang, KC Cheung, H Qin, J Dai, H Li | | | |