Tianda Li
Title · Cited by · Year
Speaker-aware BERT for multi-turn response selection in retrieval-based chatbots
JC Gu, T Li, Q Liu, ZH Ling, Z Su, S Wei, X Zhu
Proceedings of the 29th ACM International Conference on Information …, 2020
151 · 2020
Conversation- and tree-structure losses for dialogue disentanglement
T Li, JC Gu, ZH Ling, Q Liu
Proceedings of the Second DialDoc Workshop on Document-grounded Dialogue and …, 2022
25* · 2022
A short study on compressing decoder-based language models
T Li, YE Mesbahi, I Kobyzev, A Rashid, A Mahmud, N Anchuri, ...
arXiv preprint arXiv:2110.08460, 2021
21 · 2021
Several experiments on investigating pretraining and knowledge-enhanced models for natural language inference
T Li, X Zhu, Q Liu, Q Chen, Z Chen, S Wei
arXiv preprint arXiv:1904.12104, 2019
19 · 2019
Pre-trained and attention-based neural networks for building noetic task-oriented dialogue systems
JC Gu, T Li, Q Liu, X Zhu, ZH Ling, YP Ruan
arXiv preprint arXiv:2004.01940, 2020
14 · 2020
SPDF: Sparse pre-training and dense fine-tuning for large language models
V Thangarasa, A Gupta, W Marshall, T Li, K Leong, D DeCoste, S Lie, ...
Uncertainty in Artificial Intelligence, 2134-2146, 2023
9 · 2023
Learning to retrieve entity-aware knowledge and generate responses with copy mechanism for task-oriented dialogue systems
CH Tan, X Yang, Z Zheng, T Li, Y Feng, JC Gu, Q Liu, D Liu, ZH Ling, ...
arXiv preprint arXiv:2012.11937, 2020
9 · 2020
How to select one among all? An empirical study towards the robustness of knowledge distillation in natural language understanding
T Li, A Rashid, A Jafari, P Sharma, A Ghodsi, M Rezagholizadeh
Findings of the Association for Computational Linguistics: EMNLP 2021, 750-762, 2021
8* · 2021
Deep contextualized utterance representations for response selection and dialogue analysis
JC Gu, T Li, ZH Ling, Q Liu, Z Su, YP Ruan, X Zhu
IEEE/ACM Transactions on Audio, Speech, and Language Processing 29, 2443-2455, 2021
8 · 2021
Do we need Label Regularization to Fine-tune Pre-trained Language Models?
I Kobyzev, A Jafari, M Rezagholizadeh, T Li, A Do-Omri, P Lu, P Poupart, ...
arXiv preprint arXiv:2205.12428, 2022
2 · 2022
Towards understanding label regularization for fine-tuning pre-trained language models
I Kobyzev, A Jafari, M Rezagholizadeh, T Li, A Do-Omri, P Lu, A Ghodsi, ...
arXiv preprint arXiv:2205.12428, 2022
1 · 2022
Unsupervised Pre-training with Structured Knowledge for Improving Natural Language Inference
X Yang, X Zhu, Z Shi, T Li
arXiv preprint arXiv:2109.03941, 2021
1 · 2021
Have You Made a Decision? Where? A Pilot Study on Interpretability of Polarity Analysis Based on Advising Problem
T Li, JC Gu, H Liu, Q Liu, ZH Ling, Z Su, X Zhu
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
2021
SPDF: Sparse Pre-training and Dense Fine-tuning for Large Language Models (Supplementary Material)
V Thangarasa, A Gupta, W Marshall, T Li, K Leong, D DeCoste, S Lie, ...
Developing Better Models for Dialogue Threads and Responses
T Li