Tu Vu
Research Scientist, Google DeepMind; Assistant Professor, Virginia Tech
Verified email at google.com - Homepage
Title · Cited by · Year
Gemini: a family of highly capable multimodal models
G Team, R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, ...
arXiv preprint arXiv:2312.11805, 2023
900 · 2023
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
ICML, 2023
405 · 2023
SPoT: Better Frozen Model Adaptation through Soft Prompt Transfer
T Vu, B Lester, N Constant, R Al-Rfou, D Cer
ACL, 2022
221 · 2022
Exploring and Predicting Transferability across NLP Tasks
T Vu, T Wang, T Munkhdalai, A Sordoni, A Trischler, A Mattarella-Micke, ...
EMNLP, 2020
142 · 2020
JAIST: Combining Multiple Features for Answer Selection in Community Question Answering
Q Tran, V Tran, T Vu, M Nguyen, S Pham
SemEval@NAACL, 2015
96 · 2015
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
92 · 2023
Sentence Simplification with Memory-Augmented Neural Networks
T Vu, B Hu, T Munkhdalai, H Yu
NAACL, 2018
82 · 2018
Gemini: A family of highly capable multimodal models
R Anil, S Borgeaud, Y Wu, JB Alayrac, J Yu, R Soricut, J Schalkwyk, ...
arXiv preprint arXiv:2312.11805, 2023
75* · 2023
FreshLLMs: Refreshing Large Language Models with Search Engine Augmentation
T Vu, M Iyyer, X Wang, N Constant, J Wei, J Wei, C Tar, YH Sung, D Zhou, ...
ACL, 2024
60 · 2024
STraTA: Self-Training with Task Augmentation for Better Few-shot Learning
T Vu, MT Luong, QV Le, G Simon, M Iyyer
EMNLP, 2021
52 · 2021
Overcoming Catastrophic Forgetting in Zero-Shot Cross-Lingual Generation
T Vu, A Barua, B Lester, D Cer, M Iyyer, N Constant
EMNLP, 2022
45 · 2022
Mixture-of-experts meets instruction tuning: A winning combination for large language models
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
35 · 2024
Learning to simplify children stories with limited data
T Vu, G Tran, S Pham
ACIIDS, 2014
22 · 2014
Flan-MoE: Scaling Instruction-Finetuned Language Models with Sparse Mixture of Experts
S Shen, L Hou, Y Zhou, N Du, S Longpre, J Wei, HW Chung, B Zoph, ...
ICLR, 2024
21 · 2024
Integrating Multiplicative Features into Supervised Distributional Methods for Lexical Entailment
T Vu, V Shwartz
*SEM@NAACL, 2018
15 · 2018
Self-Evaluation Improves Selective Generation in Large Language Models
J Ren, Y Zhao, T Vu, PJ Liu, B Lakshminarayanan
ICBINB@NeurIPS, 2023
11 · 2023
Dialect-robust Evaluation of Generated Text
J Sun, T Sellam, E Clark, T Vu, T Dozat, D Garrette, A Siddhant, ...
ACL, 2023
11 · 2023
Leveraging QA Datasets to Improve Generative Data Augmentation
D Mekala, T Vu, T Schick, J Shang
EMNLP, 2022
11 · 2022
The Flan Collection: Designing Data and Methods for Effective Instruction Tuning
S Longpre, L Hou, T Vu, A Webson, HW Chung, Y Tay, D Zhou, QV Le, ...
arXiv preprint arXiv:2301.13688, 2023
4 · 2023
Encouraging Paragraph Embeddings to Remember Sentence Identity Improves Classification
T Vu, M Iyyer
ACL, 2019
4 · 2019