Improving the faithfulness of attention-based explanations with task-specific information for text classification. G. Chrysostomou, N. Aletras. arXiv preprint arXiv:2105.02657, 2021. Cited by 39.
Frustratingly simple pretraining alternatives to masked language modeling. A. Yamaguchi, G. Chrysostomou, K. Margatina, N. Aletras. arXiv preprint arXiv:2109.01819, 2021. Cited by 28.
An empirical study on explanations in out-of-domain settings. G. Chrysostomou, N. Aletras. arXiv preprint arXiv:2203.00056, 2022. Cited by 18.
Enjoy the salience: Towards better transformer-based faithful explanations with word salience. G. Chrysostomou, N. Aletras. arXiv preprint arXiv:2108.13759, 2021. Cited by 15.
Flexible instance-specific rationalization of NLP models. G. Chrysostomou, N. Aletras. Proceedings of the AAAI Conference on Artificial Intelligence 36 (10), 10545 …, 2022. Cited by 14.
On the impact of temporal concept drift on model explanations. Z. Zhao, G. Chrysostomou, K. Bontcheva, N. Aletras. arXiv preprint arXiv:2210.09197, 2022. Cited by 9.
Variable instance-level explainability for text classification. G. Chrysostomou, N. Aletras. arXiv, 2021. Cited by 4.
Lighter, yet More Faithful: Investigating Hallucinations in Pruned Large Language Models for Abstractive Summarization. G. Chrysostomou, Z. Zhao, M. Williams, N. Aletras. arXiv preprint arXiv:2311.09335, 2023.
Explainable Natural Language Processing. G. Chrysostomou. Computational Linguistics 48 (4), 1137-1139, 2022.
Model Interpretability for Natural Language Processing Applications. G. Chrysostomou. University of Sheffield, 2022.