Wei Wang
Alibaba Group Inc.
Verified email at alibaba-inc.com
Title
Cited by
Year
Qwen technical report
J Bai, S Bai, Y Chu, Z Cui, K Dang, X Deng, Y Fan, W Ge, Y Han, F Huang, ...
arXiv preprint arXiv:2309.16609, 2023
Cited by 326 · 2023
StructBERT: Incorporating language structures into pre-training for deep language understanding
W Wang, B Bi, M Yan, C Wu, Z Bao, J Xia, L Peng, L Si
arXiv preprint arXiv:1908.04577, 2019
Cited by 300* · 2019
Multi-granularity hierarchical attention fusion networks for reading comprehension and question answering
W Wang, M Yan, C Wu
arXiv preprint arXiv:1811.11934, 2018
Cited by 204 · 2018
RRHF: Rank responses to align language models with human feedback
H Yuan, Z Yuan, C Tan, W Wang, S Huang, F Huang
Advances in Neural Information Processing Systems 36, 2024
Cited by 147* · 2024
mPLUG: Effective and efficient vision-language learning by cross-modal skip-connections
C Li, H Xu, J Tian, W Wang, M Yan, B Bi, J Ye, H Chen, G Xu, Z Cao, ...
arXiv preprint arXiv:2205.12005, 2022
Cited by 129* · 2022
StructuralLM: Structural pre-training for form understanding
C Li, B Bi, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2105.11210, 2021
Cited by 107 · 2021
mPLUG-2: A modularized multi-modal foundation model across text, image and video
H Xu, Q Ye, M Yan, Y Shi, J Ye, Y Xu, C Li, B Bi, Q Qian, W Wang, G Xu, ...
International Conference on Machine Learning, 38728-38748, 2023
Cited by 86* · 2023
PALM: Pre-training an autoencoding&autoregressive language model for context-conditioned generation
B Bi, C Li, C Wu, M Yan, W Wang, S Huang, F Huang, L Si
arXiv preprint arXiv:2004.07159, 2020
Cited by 64 · 2020
VECO: Variable and flexible cross-lingual pre-training for language understanding and generation
F Luo, W Wang, J Liu, Y Liu, B Bi, S Huang, F Huang, L Si
arXiv preprint arXiv:2010.16046, 2020
Cited by 62* · 2020
A deep cascade model for multi-document reading comprehension
M Yan, J Xia, C Wu, B Bi, Z Zhao, J Zhang, L Si, R Wang, W Wang, ...
Proceedings of the AAAI Conference on Artificial Intelligence 33 (01), 7354-7361, 2019
Cited by 61* · 2019
How well do Large Language Models perform in Arithmetic tasks?
Z Yuan, H Yuan, C Tan, W Wang, S Huang
arXiv preprint arXiv:2304.02015, 2023
Cited by 49 · 2023
IDST at TREC 2019 Deep Learning Track: Deep Cascade Ranking with Generation-based Document Expansion and Pre-trained Language Modeling
M Yan, C Li, C Wu, B Bi, W Wang, J Xia, L Si
TREC, 2019
Cited by 42* · 2019
Incorporating external knowledge into machine reading for generative question answering
B Bi, C Wu, M Yan, W Wang, J Xia, C Li
arXiv preprint arXiv:1909.02745, 2019
Cited by 38 · 2019
SemVLP: Vision-language pre-training by aligning semantics at multiple levels
C Li, M Yan, H Xu, F Luo, W Wang, B Bi, S Huang
arXiv preprint arXiv:2103.07829, 2021
Cited by 30* · 2021
How abilities in large language models are affected by supervised fine-tuning data composition
G Dong, H Yuan, K Lu, C Li, M Xue, D Liu, W Wang, Z Yuan, C Zhou, ...
arXiv preprint arXiv:2310.05492, 2023
Cited by 25 · 2023
A unified pretraining framework for passage ranking and expansion
M Yan, C Li, B Bi, W Wang, S Huang
Proceedings of the AAAI Conference on Artificial Intelligence 35 (5), 4555-4563, 2021
Cited by 16 · 2021
Addressing semantic drift in generative question answering with auxiliary extraction
C Li, B Bi, M Yan, W Wang, S Huang
Proceedings of the 59th Annual Meeting of the Association for Computational …, 2021
Cited by 13 · 2021
STRONGHOLD: Fast and affordable billion-scale deep learning model training
X Sun, W Wang, S Qiu, R Yang, S Huang, J Xu, Z Wang
SC22: International Conference for High Performance Computing, Networking …, 2022
Cited by 9 · 2022
Grid-VLP: Revisiting grid features for vision-language pre-training
M Yan, H Xu, C Li, B Bi, J Tian, M Gui, W Wang
arXiv preprint arXiv:2108.09479, 2021
Cited by 9 · 2021
Achieving human parity on visual question answering
M Yan, H Xu, C Li, J Tian, B Bi, W Wang, W Chen, X Xu, F Wang, Z Cao, ...
arXiv preprint arXiv:2111.08896, 2021
Cited by 8 · 2021
Articles 1–20