T. Norlund, L. Hagström, R. Johansson. Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it? arXiv preprint arXiv:2109.11321, 2021.
L. Hagström, R. Johansson. What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge. arXiv preprint arXiv:2205.07065, 2022.
L. Hagström, D. Saynova, T. Norlund, M. Johansson, R. Johansson. The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models. arXiv preprint arXiv:2311.01307, 2023.
L. Hagström, R. Johansson. How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input? arXiv preprint arXiv:2209.08982, 2022.
L. Hagström, R. Johansson. Knowledge distillation for Swedish NER models: A search for performance and efficiency. Proceedings of the 23rd Nordic Conference on Computational Linguistics …, 2021.
L. Hagström. A Picture is Worth a Thousand Words: Natural Language Processing in Context. PQDT-Global, 2023.
L. Hagström, T. Norlund, R. Johansson. Can We Use Small Models to Investigate Multimodal Fusion Methods? Proceedings of the 2022 CLASP Conference on (Dis)embodiment, 45-50, 2022.
L. Hagström, L. Sjöblom. Radar sensor modelling using deep generative networks for verification of autonomous driving. 2019.