Reasoning with Language Model is Planning with World Model. S Hao, Y Gu, H Ma, JJ Hong, Z Wang, DZ Wang, Z Hu. EMNLP 2023. Cited by 147.
ToolkenGPT: Augmenting Frozen Language Models with Massive Tools via Tool Embeddings. S Hao, T Liu, Z Wang, Z Hu. NeurIPS 2023 (Oral); SoCalNLP 2023 (Best Paper Award). Cited by 66.
BertNet: Harvesting Knowledge Graphs with Arbitrary Relations from Pretrained Language Models. S Hao, B Tan, K Tang, B Ni, X Shao, H Zhang, E Xing, Z Hu. ACL 2023 (Findings). Cited by 46.
Does Recommend-Revise Produce Reliable Annotations? An Analysis on Missing Instances in DocRED. Q Huang, S Hao, Y Ye, S Zhu, Y Feng, D Zhao. ACL 2022. Cited by 20.
Benchmarking Commonsense Knowledge Base Population with an Effective Evaluation Dataset. T Fang, W Wang, S Choi, S Hao, H Zhang, Y Song, B He. EMNLP 2021. Cited by 20.
LLM Reasoners: New Evaluation, Library, and Analysis of Step-by-Step Reasoning with Large Language Models. S Hao, Y Gu, H Luo, T Liu, X Shao, X Wang, S Xie, H Ma, A Samavedhi, ... arXiv preprint arXiv:2404.05221, 2024.
Neural-Symbolic Interaction and Co-evolving. B Tan, S Hao, E Xing, Z Hu. Compendium of Neurosymbolic Artificial Intelligence, 2023.