Valentin Hofmann
Allen Institute for AI
Verified email at allenai.org
Title · Cited by · Year
Dynamic Contextualized Word Embeddings
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
51 · 2021
Superbizarre Is Not Superb: Derivational Morphology Improves BERT’s Interpretation of Complex Words
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2021
50* · 2021
DagoBERT: Generating Derivational Morphology with a Pretrained Language Model
V Hofmann, JB Pierrehumbert, H Schütze
EMNLP, 2020
27 · 2020
An Embarrassingly Simple Method to Mitigate Undesirable Properties of Pretrained Language Model Tokenizers
V Hofmann, H Schütze, J Pierrehumbert
ACL, 2022
25 · 2022
The Better Your Syntax, the Better Your Semantics? Probing Pretrained Language Models for the English Comparative Correlative
L Weissweiler, V Hofmann, A Köksal, H Schütze
EMNLP, 2022
18 · 2022
Modeling Ideological Salience and Framing in Polarized Online Groups with Graph Neural Networks and Structured Sparsity
V Hofmann, X Dong, J Pierrehumbert, H Schütze
NAACL Findings, 2022
14* · 2022
Predicting the Growth of Morphological Families from Social and Linguistic Factors
V Hofmann, JB Pierrehumbert, H Schütze
ACL, 2020
11 · 2020
A Graph Auto-encoder Model of Derivational Morphology
V Hofmann, H Schütze, JB Pierrehumbert
ACL, 2020
9 · 2020
The Reddit Politosphere: A Large-Scale Text and Network Resource of Online Political Discourse
V Hofmann, H Schütze, JB Pierrehumbert
ICWSM, 2022
8 · 2022
Geographic Adaptation of Pretrained Language Models
V Hofmann, G Glavaš, N Ljubešić, JB Pierrehumbert, H Schütze
TACL, 2024
7 · 2024
Dolma: An Open Corpus of Three Trillion Tokens for Language Model Pretraining Research
L Soldaini, R Kinney, A Bhagia, D Schwenk, D Atkinson, R Authur, ...
arXiv:2402.00159, 2024
4 · 2024
Counting the Bugs in ChatGPT's Wugs: A Multilingual Investigation into the Morphological Capabilities of a Large Language Model
L Weissweiler*, V Hofmann*, A Kantharuban, A Cai, R Dutt, A Hengle, ...
EMNLP, 2023
2 · 2023
CaMEL: Case Marker Extraction without Labels
L Weissweiler, V Hofmann, MJ Sabet, H Schütze
ACL, 2022
2 · 2022
Dialect prejudice predicts AI decisions about people's character, employability, and criminality
V Hofmann, PR Kalluri, D Jurafsky, S King
arXiv:2403.00742, 2024
1 · 2024
Political Compass or Spinning Arrow? Towards More Meaningful Evaluations for Values and Opinions in Large Language Models
P Röttger*, V Hofmann*, V Pyatkin, M Hinck, HR Kirk, H Schütze, D Hovy
arXiv:2402.16786, 2024
2024
Graph-enhanced Large Language Models in Asynchronous Plan Reasoning
F Lin, E La Malfa, V Hofmann, EM Yang, A Cohn, JB Pierrehumbert
arXiv:2402.02805, 2024
2024
Paloma: A Benchmark for Evaluating Language Model Fit
I Magnusson, A Bhagia, V Hofmann, L Soldaini, AH Jha, O Tafjord, ...
arXiv:2312.10523, 2023
2023
Explaining Pretrained Language Models' Understanding of Linguistic Structures Using Construction Grammar
L Weissweiler, V Hofmann, A Köksal, H Schütze
Frontiers in Artificial Intelligence, 2023
2023
Unsupervised Detection of Contextualized Embedding Bias with Application to Ideology
V Hofmann, J Pierrehumbert, H Schütze
ICML, 2022
2022
Articles 1–19