Peng Lu
Verified email at umontreal.ca - Homepage
Title · Cited by · Year
SC-LSTM: Learning task-specific representations in multi-task learning for sequence labeling
P Lu, T Bai, P Langlais
Proceedings of the 2019 Conference of the North American Chapter of the …, 2019
Cited by 27 · 2019
RW-KD: Sample-wise loss terms re-weighting for knowledge distillation
P Lu, A Ghaddar, A Rashid, M Rezagholizadeh, A Ghodsi, P Langlais
Findings of the Association for Computational Linguistics: EMNLP 2021, 3145-3152, 2021
Cited by 8 · 2021
Improving generalization of pre-trained language models via stochastic weight averaging
P Lu, I Kobyzev, M Rezagholizadeh, A Rashid, A Ghodsi, P Langlais
arXiv preprint arXiv:2212.05956, 2022
Cited by 6 · 2022
CNFRD: A few-shot rumor detection framework via capsule network for COVID-19
D Chen, X Chen, P Lu, X Wang, X Lan
International Journal of Intelligent Systems 2023, 2023
Cited by 5 · 2023
DNETC: dynamic network embedding preserving both triadic closure evolution and community structures
M Yang, X Chen, B Chen, P Lu, Y Du
Knowledge and Information Systems 65 (3), 1129-1157, 2023
Cited by 4 · 2023
Evolutionary prediction of nonstationary event popularity dynamics of Weibo social network using time-series characteristics
X Chen, X Lan, J Wan, P Lu, M Yang
Discrete Dynamics in Nature and Society 2021, 1-19, 2021
Cited by 3 · 2021
DialogueINAB: an interaction neural network based on attitudes and behaviors of interlocutors for dialogue emotion recognition
J Ding, X Chen, P Lu, Z Yang, X Li, Y Du
The Journal of Supercomputing 79 (18), 20481-20514, 2023
Cited by 2 · 2023
Do we need Label Regularization to Fine-tune Pre-trained Language Models?
I Kobyzev, A Jafari, M Rezagholizadeh, T Li, A Do-Omri, P Lu, P Poupart, ...
arXiv preprint arXiv:2205.12428, 2022
Cited by 2 · 2022
Resonance RoPE: Improving Context Length Generalization of Large Language Models
S Wang, I Kobyzev, P Lu, M Rezagholizadeh, B Liu
arXiv preprint arXiv:2403.00071, 2024
Cited by 1 · 2024
SiMaLSTM-SNP: novel semantic relatedness learning model preserving both Siamese networks and membrane computing
X Gu, X Chen, P Lu, X Lan, X Li, Y Du
The Journal of Supercomputing 80 (3), 3382-3411, 2024
Cited by 1 · 2024
Hyperparameter optimization for Large Language Model instruction-tuning
C Tribes, S Benarroch-Lelong, P Lu, I Kobyzev
arXiv preprint arXiv:2312.00949, 2023
Cited by 1 · 2023
Influence maximization in social networks using role-based embedding
X Gu, Z Wang, X Chen, P Lu, Y Du, M Tang
Networks & Heterogeneous Media 18 (4), 2023
Cited by 1 · 2023
Towards understanding label regularization for fine-tuning pre-trained language models
I Kobyzev, A Jafari, M Rezagholizadeh, T Li, A Do-Omri, P Lu, A Ghodsi, ...
arXiv preprint arXiv:2205.12428, 2022
Cited by 1 · 2022
CAREA: Cotraining Attribute and Relation Embeddings for Cross-Lingual Entity Alignment in Knowledge Graphs
B Chen, X Chen, P Lu, Y Du
Discrete Dynamics in Nature and Society 2020, 1-11, 2020
Cited by 1 · 2020
Efficient Classification of Long Documents via State-Space Models
P Lu, S Wang, M Rezagholizadeh, B Liu, I Kobyzev
The 2023 Conference on Empirical Methods in Natural Language Processing, 2023
2023
LABO: Towards Learning Optimal Label Regularization via Bi-level Optimization
P Lu, A Rashid, I Kobyzev, M Rezagholizadeh, P Langlais
arXiv preprint arXiv:2305.04971, 2023
2023
Methods, devices and media for re-weighting to improve knowledge distillation
P Lu, A Rashid, M Rezagholizadeh, A Ghaddar
US Patent App. 17/231,514, 2022
2022
Pseudo Knowledge Distillation: Towards Learning Optimal Instance-specific Label Smoothing Regularization
P Lu, A Rashid, I Kobyzev, M Rezagholizadeh, P Langlais
2021
Empirical study and multi-task learning exploration for neural sequence labeling models
P Lu
2019