S.-M. Udrescu, A. Tan, J. Feng, O. Neto, T. Wu, M. Tegmark. "AI Feynman 2.0: Pareto-optimal symbolic regression exploiting graph modularity." Advances in Neural Information Processing Systems 33, 4860-4871, 2020.
K. M. Collins, C. Wong, J. Feng, M. Wei, J. B. Tenenbaum. "Structured, flexible, and robust: benchmarking and improving large language models towards more human-like behavior in out-of-distribution reasoning tasks." arXiv preprint arXiv:2205.05718, 2022.
J. Feng, J. Steinhardt. "How do language models bind entities in context?" arXiv preprint arXiv:2310.17191, 2023.
L. Wong, J. Mao, P. Sharma, Z. S. Siegel, J. Feng, N. Korneev, J. B. Tenenbaum, ... "Learning adaptive planning representations with natural language guidance." arXiv preprint arXiv:2312.08566, 2023.
J. Feng, S. Russell, J. Steinhardt. "Monitoring Latent World States in Language Models with Propositional Probes." arXiv preprint arXiv:2406.19501, 2024.