Xinsong Zhang
ByteDance AI Lab
Verified email at bytedance.com
Title · Cited by · Year
Multi-grained vision language pre-training: Aligning texts with visual concepts
Y Zeng, X Zhang, H Li
arXiv preprint arXiv:2111.08276, 2021
Cited by 195 · 2021
Multi-labeled relation extraction with attentive capsule network
X Zhang, P Li, W Jia, H Zhao
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 7484-7491, 2019
Cited by 69 · 2019
Neural relation extraction via inner-sentence noise reduction and transfer learning
T Liu, X Zhang, W Zhou, W Jia
arXiv preprint arXiv:1808.06738, 2018
Cited by 69 · 2018
AMBERT: A pre-trained language model with multi-grained tokenization
X Zhang, P Li, H Li
arXiv preprint arXiv:2008.11869, 2020
Cited by 47 · 2020
X2-VLM: All-In-One Pre-trained Model For Vision-Language Tasks
Y Zeng, X Zhang, H Li, J Wang, J Zhang, W Zhou
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2023
Cited by 30 · 2023
GAN driven semi-distant supervision for relation extraction
P Li, X Zhang, W Jia, H Zhao
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
Cited by 21 · 2019
Robust neural relation extraction via multi-granularity noises reduction
X Zhang, T Liu, P Li, W Jia, H Zhao
IEEE Transactions on Knowledge and Data Engineering 33 (9), 3297-3310, 2020
Cited by 18 · 2020
EfficientVLM: Fast and accurate vision-language models via knowledge distillation and modal-adaptive pruning
T Wang, W Zhou, Y Zeng, X Zhang
arXiv preprint arXiv:2210.07795, 2022
Cited by 14 · 2022
Cross-view language modeling: Towards unified cross-lingual cross-modal pre-training
Y Zeng, W Zhou, A Luo, Z Cheng, X Zhang
arXiv preprint arXiv:2206.00621, 2022
Cited by 13 · 2022
Toward building general foundation models for language, vision, and vision-language understanding tasks
X Zhang, Y Zeng, J Zhang, H Li
arXiv preprint arXiv:2301.05065, 2023
Cited by 11 · 2023
Write and paint: Generative vision-language models are unified modal learners
S Diao, W Zhou, X Zhang, J Wang
The Eleventh International Conference on Learning Representations, 2022
Cited by 11 · 2022
VLUE: A multi-task multi-dimension benchmark for evaluating vision-language pre-training
W Zhou, Y Zeng, S Diao, X Zhang
International Conference on Machine Learning, 27395-27411, 2022
Cited by 9 · 2022
VLUE: A multi-task benchmark for evaluating vision-language models
W Zhou, Y Zeng, S Diao, X Zhang
arXiv preprint arXiv:2205.15237, 2022
Cited by 9 · 2022
Active testing: An unbiased evaluation method for distantly supervised relation extraction
P Li, X Zhang, W Jia, W Zhao
arXiv preprint arXiv:2010.08777, 2020
Cited by 5 · 2020
Prefix language models are unified modal learners
S Diao, W Zhou, X Zhang, J Wang
arXiv preprint arXiv:2206.07699 3, 2022
Cited by 4 · 2022
Fine-grained relation extraction with focal multi-task learning
X Zhang, T Liu, W Jia, P Li
Science China Information Sciences 63 (6), 169103, 2020
Cited by 1 · 2020
Neural Typing Entities in Chinese-Pedia
Y You, S Zhang, J Lou, X Zhang, W Jia
Web and Big Data: Second International Joint Conference, APWeb-WAIM 2018 …, 2018
2018