Boxi Cao
Institute of Software, Chinese Academy of Sciences
Verified email at mails.ucas.edu.cn - Homepage
Title · Cited by · Year
Knowledgeable or educated guess? revisiting language models as knowledge bases
B Cao, H Lin, X Han, L Sun, L Yan, M Liao, T Xue, J Xu
arXiv preprint arXiv:2106.09231, 2021
Cited by: 129 · Year: 2021
ToolAlpaca: Generalized Tool Learning for Language Models with 3000 Simulated Cases
Q Tang, Z Deng, H Lin, X Han, Q Liang, B Cao, L Sun
arXiv preprint arXiv:2306.05301, 2023
Cited by: 62 · Year: 2023
Can prompt probe pretrained language models? understanding the invisible risks from a causal view
B Cao, H Lin, X Han, F Liu, L Sun
arXiv preprint arXiv:2203.12258, 2022
Cited by: 32 · Year: 2022
Pre-training to match for unified low-shot relation extraction
F Liu, H Lin, X Han, B Cao, L Sun
arXiv preprint arXiv:2203.12274, 2022
Cited by: 27 · Year: 2022
Learning in-context learning for named entity recognition
J Chen, Y Lu, H Lin, J Lou, W Jia, D Dai, H Wu, B Cao, X Han, L Sun
arXiv preprint arXiv:2305.11038, 2023
Cited by: 22 · Year: 2023
The life cycle of knowledge in big language models: A survey
B Cao, H Lin, X Han, L Sun
Machine Intelligence Research 21 (2), 217-238, 2024
Cited by: 14 · Year: 2024
CUGE: A Chinese language understanding and generation evaluation benchmark
Y Yao, Q Dong, J Guan, B Cao, Z Zhang, C Xiao, X Wang, F Qi, J Bao, ...
arXiv preprint arXiv:2112.13610, 2021
Cited by: 12 · Year: 2021
Retentive or forgetful? diving into the knowledge memorizing mechanism of language models
B Cao, Q Tang, H Lin, S Jiang, B Dong, X Han, J Chen, T Wang, L Sun
arXiv preprint arXiv:2305.09144, 2023
Cited by: 7 · Year: 2023
Learning or self-aligning? rethinking instruction fine-tuning
M Ren, B Cao, H Lin, L Cao, X Han, K Zeng, G Wan, X Cai, L Sun
arXiv preprint arXiv:2402.18243, 2024
Cited by: 3 · Year: 2024
Not All Contexts Are Equal: Teaching LLMs Credibility-aware Generation
R Pan, B Cao, H Lin, X Han, J Zheng, S Wang, X Cai, L Sun
arXiv preprint arXiv:2404.06809, 2024
Cited by: 2 · Year: 2024
Spiral of Silence: How is Large Language Model Killing Information Retrieval? -- A Case Study on Open Domain Question Answering
X Chen, B He, H Lin, X Han, T Wang, B Cao, L Sun, Y Sun
arXiv preprint arXiv:2404.10496, 2024
Cited by: 1 · Year: 2024
Beyond Correctness: Benchmarking Multi-dimensional Code Generation for Large Language Models
J Zheng, B Cao, Z Ma, R Pan, H Lin, Y Lu, X Han, L Sun
arXiv preprint arXiv:2407.11470, 2024
Year: 2024
Towards Scalable Automated Alignment of LLMs: A Survey
B Cao, K Lu, X Lu, J Chen, M Ren, H Xiang, P Liu, Y Lu, B He, X Han, ...
arXiv preprint arXiv:2406.01252, 2024
Year: 2024
URL: Universal Referential Knowledge Linking via Task-instructed Representation Compression
Z Li, H Lin, T Wang, B Cao, Y Lu, W Zhou, H Wang, Z Zeng, L Sun, X Han
arXiv preprint arXiv:2404.16248, 2024
Year: 2024
Towards Universal Dense Blocking for Entity Resolution
T Wang, H Lin, X Han, X Chen, B Cao, L Sun
arXiv preprint arXiv:2404.14831, 2024
Year: 2024
Does the Correctness of Factual Knowledge Matter for Factual Knowledge-Enhanced Pre-trained Language Models?
B Cao, Q Tang, H Lin, X Han, L Sun
Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing, 2023
Year: 2023