Zhili Liu
Verified email at connect.ust.hk
Title
Cited by
Year
Mixed autoencoder for self-supervised visual representation learning
K Chen, Z Liu, L Hong, H Xu, Z Li, DY Yeung
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2023
Cited by 25 · 2023
EHSOD: CAM-guided end-to-end hybrid-supervised object detection with cascade refinement
L Fang, H Xu, Z Liu, S Parisot, Z Li
Proceedings of the AAAI Conference on Artificial Intelligence 34 (07), 10778 …, 2020
Cited by 22 · 2020
Task-Customized Self-Supervised Pre-training with Scalable Dynamic Routing
Z Liu, J Han, K Chen, L Hong, H Xu, C Xu, Z Li
Proceedings of the AAAI Conference on Artificial Intelligence 55, 65, 2022
Cited by 21* · 2022
DiffFit: Unlocking Transferability of Large Diffusion Models via Simple Parameter-Efficient Fine-Tuning
E Xie, L Yao, H Shi, Z Liu, D Zhou, Z Liu, J Li, Z Li
IEEE International Conference on Computer Vision, 2023
Cited by 19 · 2023
Mixture of cluster-conditional lora experts for vision-language instruction tuning
Y Gou, Z Liu, K Chen, L Hong, H Xu, A Li, DY Yeung, JT Kwok, Y Zhang
arXiv preprint arXiv:2312.12379, 2023
Cited by 17 · 2023
Task-customized Masked Autoencoder via Mixture of Cluster-conditional Experts
Z Liu, K Chen, J Han, H Lanqing, H Xu, Z Li, J Kwok
International Conference on Learning Representations
Cited by 16*
Your contrastive learning is secretly doing stochastic neighbor embedding
T Hu, Z Liu, F Zhou, W Wang, W Huang
International Conference on Learning Representations, 2022
Cited by 15 · 2022
Geom-erasing: Geometry-driven removal of implicit concept in diffusion models
Z Liu, K Chen, Y Zhang, J Han, L Hong, H Xu, Z Li, DY Yeung, J Kwok
arXiv preprint arXiv:2310.05873, 2023
Cited by 10 · 2023
Trackdiffusion: Multi-object tracking data generation via diffusion models
P Li, Z Liu, K Chen, L Hong, Y Zhuge, DY Yeung, H Lu, X Jia
arXiv preprint arXiv:2312.00651, 2023
Cited by 8 · 2023
Relaxed conditional image transfer for semi-supervised domain adaptation
Q Luo, Z Liu, L Hong, C Li, K Yang, L Wang, F Zhou, G Li, Z Li, J Zhu
arXiv preprint arXiv:2101.01400, 2021
Cited by 4 · 2021
Eyes closed, safety on: Protecting multimodal llms via image-to-text transformation
Y Gou, K Chen, Z Liu, L Hong, H Xu, Z Li, DY Yeung, JT Kwok, Y Zhang
arXiv preprint arXiv:2403.09572, 2024
Cited by 3 · 2024
MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric
H Lin, H Bai, Z Liu, L Hou, M Sun, L Song, Y Wei, Z Sun
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2024
Cited by 1 · 2024
Mixture of insighTful Experts (MoTE): The Synergy of Thought Chains and Expert Mixtures in Self-Alignment
Z Liu, Y Gou, K Chen, L Hong, J Gao, F Mi, Y Zhang, Z Li, X Jiang, Q Liu, ...
arXiv preprint arXiv:2405.00557, 2024
2024
Articles 1–13