Exploring the use of large language models for reference-free text quality evaluation: An empirical study. Y Chen, R Wang, H Jiang, S Shi, R Xu. Findings of the Association for Computational Linguistics: IJCNLP-AACL 2023, 2023. Cited by 48.
Masking and generation: An unsupervised method for sarcasm detection. R Wang, Q Wang, B Liang, Y Chen, Z Wen, B Qin, R Xu. Proceedings of the 45th International ACM SIGIR Conference on Research and Development in Information Retrieval, 2022. Cited by 12.
Self-critique prompting with large language models for inductive instructions. R Wang, H Wang, F Mi, Y Chen, R Xu, KF Wong. arXiv preprint arXiv:2305.13733, 2023. Cited by 9.
An empirical study on multiple information sources for zero-shot fine-grained entity typing. Y Chen, H Jiang, L Liu, S Shi, C Fan, M Yang, R Xu. Proceedings of the 2021 Conference on Empirical Methods in Natural Language Processing, 2021. Cited by 9.
Learning from sibling mentions with scalable graph inference in fine-grained entity typing. Y Chen, J Cheng, H Jiang, L Liu, H Zhang, S Shi, R Xu. Proceedings of the 60th Annual Meeting of the Association for Computational Linguistics, 2022. Cited by 7.
EgoPlan-Bench: Benchmarking egocentric embodied planning with multimodal large language models. Y Chen, Y Ge, Y Ge, M Ding, B Li, R Wang, R Xu, Y Shan, X Liu. arXiv preprint arXiv:2312.06722, 2023. Cited by 5.
Retrieval-free knowledge injection through multi-document traversal for dialogue models. R Wang, J Bao, F Mi, Y Chen, H Wang, Y Wang, Y Li, L Shang, KF Wong, et al. Proceedings of the 61st Annual Meeting of the Association for Computational Linguistics, 2023. Cited by 5.
MCPG: A flexible multi-level controllable framework for unsupervised paraphrase generation. Y Chen, H Jiang, L Liu, R Wang, S Shi, R Xu. Findings of the Association for Computational Linguistics: EMNLP 2022, 5948-5958, 2022. Cited by 5.
Named entity recognition in the financial domain combining glyph features and iterative learning. 刘宇瀚, 刘常健, 徐睿峰, 骆旺达, 陈奕, 吉忠晟, 应能涛. Journal of Chinese Information Processing (中文信息学报) 34(11), 74-83, 2020. Cited by 3.
Fine-grained sentiment analysis combining a financial-domain sentiment lexicon and an attention mechanism. 祝清麟, 梁斌, 徐睿峰, 刘宇瀚, 陈奕, 毛瑞彬. Journal of Chinese Information Processing (中文信息学报) 36(8), 109-117, 2022. Cited by 2.
Role prompting guided domain adaptation with general capability preserve for large language models. R Wang, F Mi, Y Chen, B Xue, H Wang, Q Zhu, KF Wong, R Xu. arXiv preprint arXiv:2403.02756, 2024.