Kaixuan Ji
Verified email at mails.tsinghua.edu.cn
P-tuning v2: Prompt tuning can be comparable to fine-tuning universally across scales and tasks
X Liu, K Ji, Y Fu, WL Tam, Z Du, Z Yang, J Tang
arXiv preprint arXiv:2110.07602, 2021
Cited by 988
Self-play fine-tuning converts weak language models to strong language models
Z Chen, Y Deng, H Yuan, K Ji, Q Gu
arXiv preprint arXiv:2401.01335, 2024
Cited by 83
Parameter-efficient prompt tuning makes generalized and calibrated neural text retrievers
WL Tam, X Liu, K Ji, L Xue, X Zhang, Y Dong, J Liu, M Hu, J Tang
arXiv preprint arXiv:2207.07087, 2022
Cited by 24
Self-play preference optimization for language model alignment
Y Wu, Z Sun, H Yuan, K Ji, Y Yang, Q Gu
arXiv preprint arXiv:2405.00675, 2024
Cited by 13
Reinforcement learning from human feedback with active queries
K Ji, J He, Q Gu
arXiv preprint arXiv:2402.09401, 2024
Cited by 3
Self-play fine-tuning of diffusion models for text-to-image generation
H Yuan, Z Chen, K Ji, Q Gu
arXiv preprint arXiv:2402.10210, 2024
Cited by 2
Horizon-free reinforcement learning in adversarial linear mixture MDPs
K Ji, Q Zhao, J He, W Zhang, Q Gu
arXiv preprint arXiv:2305.08359, 2023
Cited by 2
BiLL-VTG: Bridging Large Language Models and Lightweight Visual Tools for Video-based Texts Generation
J Qi, K Ji, J Yu, D Wang, B Xu, L Hou, J Li
arXiv preprint arXiv:2310.10586, 2023
Mastering the Task of Open Information Extraction with Large Language Models and Consistent Reasoning Environment
J Qi, K Ji, X Wang, J Yu, K Zeng, L Hou, J Li, B Xu
arXiv preprint arXiv:2310.10590, 2023