How much pretraining data do language models need to learn syntax? L Pérez-Mayos, M Ballesteros, L Wanner. arXiv preprint arXiv:2109.03160, 2021. Cited by 34.
Part-of-Speech and Prosody-based Approaches for Robot Speech and Gesture Synchronization. L Pérez-Mayos, M Farrús, J Adell. Journal of Intelligent & Robotic Systems, 1-11. Cited by 12.
Assessing the syntactic capabilities of transformer-based multilingual language models. L Pérez-Mayos, AT García, S Mille, L Wanner. arXiv preprint arXiv:2105.04688, 2021. Cited by 7.
On the Evolution of Syntactic Information Encoded by BERT's Contextualized Representations. L Pérez-Mayos, R Carlini, M Ballesteros, L Wanner. Proceedings of Short Papers, EACL 2021, 2021. Cited by 7.
MIN_PT: An European Portuguese lexicon for minorities related terms. P Fortuna, V Cortez, MS Ramalho, L Pérez-Mayos. Proceedings of the 5th Workshop on Online Abuse and Harms (WOAH 2021), 76-80, 2021. Cited by 5.
Improving the quality of video-to-language models by optimizing annotation of the training material. L Pérez-Mayos, FM Sukno, L Wanner. MultiMedia Modeling: 24th International Conference, MMM 2018, Bangkok …, 2018. Cited by 2.
In-depth exploration of the syntactic capabilities of autoencoding language models for downstream applications. L Pérez-Mayos. PhD thesis, Universitat Pompeu Fabra, 2022.
Cartography of Natural Language Processing for Social Good (NLP4SG): Searching for Definitions, Statistics and White Spots. P Fortuna, L Pérez-Mayos, L Wanner. Proceedings of the 1st Workshop on NLP for Positive Impact, 19-26, 2021.