Ming Yan
Alibaba Group
No verified email address

Title / Cited by / Year
mPLUG-Owl: Modularization empowers large language models with multimodality
Q Ye, H Xu, G Xu, J Ye, M Yan, Y Zhou, J Wang, A Hu, P Shi, Y Shi, C Li, ...
arXiv preprint arXiv:2304.14178, 2023
Cited by 387 · 2023
Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering
W Wang, M Yan, C Wu
arXiv preprint arXiv:1811.11934, 2018
Cited by 204 · 2018
StructBERT: Incorporating language structures into pre-training for deep language understanding
W Wang, B Bi, M Yan, C Wu, Z Bao, J Xia, L Peng, L Si
arXiv preprint arXiv:1908.04577, 2019
Cited by 154 · 2019
X-CLIP: End-to-end multi-grained contrastive learning for video-text retrieval
Y Ma, G Xu, X Sun, M Yan, J Zhang, R Ji
Proceedings of the 30th ACM International Conference on Multimedia, 638-647, 2022
Cited by 133 · 2022
E2E-VLP: End-to-end vision-language pre-training enhanced by visual learning
H Xu, M Yan, C Li, B Bi, S Huang, W Xiao, F Huang
arXiv preprint arXiv:2106.01804, 2021
Cited by 102 · 2021
StructuralLM: Structural pre-training for form understanding
C Li, B Bi, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2105.11210, 2021
Cited by 101 · 2021
Unified YouTube video recommendation via cross-network collaboration
M Yan, J Sang, C Xu
Proceedings of the 5th ACM on International Conference on Multimedia …, 2015
Cited by 90 · 2015
Session-aware information embedding for e-commerce product recommendation
C Wu, M Yan
Proceedings of the 2017 ACM on conference on information and knowledge …, 2017
Cited by 80 · 2017
Friend transfer: Cold-start friend recommendation with cross-platform transfer learning of social knowledge
M Yan, J Sang, T Mei, C Xu
2013 IEEE International Conference on Multimedia and Expo (ICME), 1-6, 2013
Cited by 79 · 2013
mPLUG-2: A modularized multi-modal foundation model across text, image and video
H Xu, Q Ye, M Yan, Y Shi, J Ye, Y Xu, C Li, B Bi, Q Qian, W Wang, G Xu, ...
International Conference on Machine Learning, 38728-38748, 2023
Cited by 68 · 2023
mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections
C Li, H Xu, J Tian, W Wang, M Yan, B Bi, J Ye, H Chen, G Xu, Z Cao, ...
arXiv preprint arXiv:2205.12005, 2022
Cited by 66 · 2022
Twitter is faster: Personalized time-aware video recommendation from Twitter to YouTube
Z Deng, M Yan, J Sang, C Xu
ACM Transactions on Multimedia Computing, Communications, and Applications …, 2015
Cited by 65 · 2015
PALM: Pre-training an autoencoding & autoregressive language model for context-conditioned generation
B Bi, C Li, C Wu, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2004.07159, 2020
Cited by 64 · 2020
mPLUG-Owl2: Revolutionizing multi-modal large language model with modality collaboration
Q Ye, H Xu, J Ye, M Yan, H Liu, Q Qian, J Zhang, F Huang, J Zhou
arXiv preprint arXiv:2311.04257, 2023
Cited by 63 · 2023
A deep cascade model for multi-document reading comprehension
M Yan, J Xia, C Wu, B Bi, Z Zhao, J Zhang, L Si, R Wang, W Wang, ...
Proceedings of the AAAI conference on artificial intelligence 33 (01), 7354-7361, 2019
Cited by 56 · 2019
Mining cross-network association for YouTube video promotion
M Yan, J Sang, C Xu
Proceedings of the 22nd ACM international conference on Multimedia, 557-566, 2014
Cited by 50 · 2014
mPLUG-DocOwl: Modularized multimodal large language model for document understanding
J Ye, A Hu, H Xu, Q Ye, M Yan, Y Dan, C Zhao, G Xu, C Li, J Tian, Q Qi, ...
arXiv preprint arXiv:2307.02499, 2023
Cited by 45 · 2023
HiTeA: Hierarchical temporal-aware video-language pre-training
Q Ye, G Xu, M Yan, H Xu, Q Qian, J Zhang, F Huang
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 43 · 2023
Incorporating external knowledge into machine reading for generative question answering
B Bi, C Wu, M Yan, W Wang, J Xia, C Li
arXiv preprint arXiv:1909.02745, 2019
Cited by 38 · 2019
Evaluation and analysis of hallucination in large vision-language models
J Wang, Y Zhou, G Xu, P Shi, C Zhao, H Xu, Q Ye, M Yan, J Zhang, J Zhu, ...
arXiv preprint arXiv:2308.15126, 2023
Cited by 37 · 2023
Articles 1–20