Lovisa Hagström
Verified email at chalmers.se
Transferring Knowledge from Vision to Language: How to Achieve it and how to Measure it?
T Norlund, L Hagström, R Johansson
arXiv preprint arXiv:2109.11321, 2021
Cited by 17
What do Models Learn From Training on More Than Text? Measuring Visual Commonsense Knowledge
L Hagström, R Johansson
arXiv preprint arXiv:2205.07065, 2022
Cited by 6
The Effect of Scaling, Retrieval Augmentation and Form on the Factual Consistency of Language Models
L Hagström, D Saynova, T Norlund, M Johansson, R Johansson
arXiv preprint arXiv:2311.01307, 2023
Cited by 3
How to Adapt Pre-trained Vision-and-Language Models to a Text-only Input?
L Hagström, R Johansson
arXiv preprint arXiv:2209.08982, 2022
Cited by 3
Knowledge distillation for Swedish NER models: A search for performance and efficiency
L Hagström, R Johansson
Proceedings of the 23rd Nordic Conference on Computational Linguistics …, 2021
Cited by 3
A Picture is Worth a Thousand Words: Natural Language Processing in Context
L Hagström
PQDT-Global, 2023
Can We Use Small Models to Investigate Multimodal Fusion Methods?
L Hagström, T Norlund, R Johansson
Proceedings of the 2022 CLASP Conference on (Dis)embodiment, 45-50, 2022
Radar sensor modelling using deep generative networks for verification of autonomous driving
L Hagström, L Sjöblom
2019