Assessing phrasal representation and composition in transformers. L. Yu, A. Ettinger. arXiv preprint arXiv:2010.03763, 2020. (Cited by 72)
On the interplay between fine-tuning and composition in transformers. L. Yu, A. Ettinger. arXiv preprint arXiv:2105.14668, 2021. (Cited by 12)
Counterfactual reasoning: Testing language models' understanding of hypothetical scenarios. J. Li, L. Yu, A. Ettinger. arXiv preprint arXiv:2305.16572, 2023. (Cited by 9)
"No, they did not": Dialogue response dynamics in pre-trained language models. S. J. Kim, L. Yu, A. Ettinger. arXiv preprint arXiv:2210.02526, 2022. (Cited by 6)
VinaSC: Scalable AutoDock Vina with fine-grained scheduling on heterogeneous platform. L. Yu, Z. Luan, X. Sun, Z. Wang, H. Yang. 2016 IEEE International Conference on Bioinformatics and Biomedicine (BIBM …), 2016. (Cited by 3)
Counterfactual reasoning: Do language models need world knowledge for causal inference? J. Li, L. Yu, A. Ettinger. NeurIPS 2022 Workshop on Neuro Causal and Symbolic AI (nCSI), 2022. (Cited by 2)
Counterfactual reasoning: Do language models need world knowledge for causal understanding? J. Li, L. Yu, A. Ettinger. arXiv preprint arXiv:2212.03278, 2022. (Cited by 1)
Analyzing and Improving Compositionality in Neural Language Models. L. Yu. The University of Chicago, 2021.
A Black-Box Approach for Detecting the Failure Traces. Y. Meng, L. Yu, Z. Luan, D. Qian, M. Xie, Z. Du. Trustworthy Computing and Services: International Conference, ISCTCS 2013 …, 2014.