Multi-grained vision language pre-training: Aligning texts with visual concepts. Y Zeng, X Zhang, H Li. arXiv preprint arXiv:2111.08276, 2021. (Cited by 195)
Multi-labeled relation extraction with attentive capsule network. X Zhang, P Li, W Jia, H Zhao. Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 7484–7491, 2019. (Cited by 69)
Neural relation extraction via inner-sentence noise reduction and transfer learning. T Liu, X Zhang, W Zhou, W Jia. arXiv preprint arXiv:1808.06738, 2018. (Cited by 69)
AMBERT: A pre-trained language model with multi-grained tokenization. X Zhang, P Li, H Li. arXiv preprint arXiv:2008.11869, 2020. (Cited by 47)
X2-VLM: All-in-one pre-trained model for vision-language tasks. Y Zeng, X Zhang, H Li, J Wang, J Zhang, W Zhou. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023. (Cited by 30)
GAN driven semi-distant supervision for relation extraction. P Li, X Zhang, W Jia, H Zhao. Proceedings of the 2019 Conference of the North American Chapter of the …, 2019. (Cited by 21)
Robust neural relation extraction via multi-granularity noises reduction. X Zhang, T Liu, P Li, W Jia, H Zhao. IEEE Transactions on Knowledge and Data Engineering 33 (9), 3297–3310, 2020. (Cited by 18)
EfficientVLM: Fast and accurate vision-language models via knowledge distillation and modal-adaptive pruning. T Wang, W Zhou, Y Zeng, X Zhang. arXiv preprint arXiv:2210.07795, 2022. (Cited by 14)
Cross-view language modeling: Towards unified cross-lingual cross-modal pre-training. Y Zeng, W Zhou, A Luo, Z Cheng, X Zhang. arXiv preprint arXiv:2206.00621, 2022. (Cited by 13)
Toward building general foundation models for language, vision, and vision-language understanding tasks. X Zhang, Y Zeng, J Zhang, H Li. arXiv preprint arXiv:2301.05065, 2023. (Cited by 11)
Write and paint: Generative vision-language models are unified modal learners. S Diao, W Zhou, X Zhang, J Wang. The Eleventh International Conference on Learning Representations, 2022. (Cited by 11)
VLUE: A multi-task multi-dimension benchmark for evaluating vision-language pre-training. W Zhou, Y Zeng, S Diao, X Zhang. International Conference on Machine Learning, 27395–27411, 2022. (Cited by 9)
VLUE: A multi-task benchmark for evaluating vision-language models. W Zhou, Y Zeng, S Diao, X Zhang. arXiv preprint arXiv:2205.15237, 2022. (Cited by 9)
Active testing: An unbiased evaluation method for distantly supervised relation extraction. P Li, X Zhang, W Jia, W Zhao. arXiv preprint arXiv:2010.08777, 2020. (Cited by 5)
Prefix language models are unified modal learners. S Diao, W Zhou, X Zhang, J Wang. arXiv preprint arXiv:2206.07699, 2022. (Cited by 4)
Fine-grained relation extraction with focal multi-task learning. X Zhang, T Liu, W Jia, P Li. Science China Information Sciences 63 (6), 169103, 2020. (Cited by 1)
Neural typing entities in Chinese-Pedia. Y You, S Zhang, J Lou, X Zhang, W Jia. Web and Big Data: Second International Joint Conference, APWeb-WAIM 2018 …, 2018.