Liam Fowl
University of Maryland
Witches' brew: Industrial scale data poisoning via gradient matching
J Geiping, L Fowl, WR Huang, W Czaja, G Taylor, M Moeller, T Goldstein
arXiv preprint arXiv:2009.02276, 2020
Cited by 179 · 2020
Adversarially robust distillation
M Goldblum, L Fowl, S Feizi, T Goldstein
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 3996-4003, 2020
Cited by 178 · 2020
Metapoison: Practical general-purpose clean-label data poisoning
WR Huang, J Geiping, L Fowl, G Taylor, T Goldstein
Advances in Neural Information Processing Systems 33, 12080-12091, 2020
Cited by 172 · 2020
Strong data augmentation sanitizes poisoning and backdoor attacks without an accuracy tradeoff
E Borgnia, V Cherepanova, L Fowl, A Ghiasi, J Geiping, M Goldblum, ...
ICASSP 2021 - 2021 IEEE International Conference on Acoustics, Speech and …, 2021
Cited by 108 · 2021
Deep k-NN Defense Against Clean-Label Data Poisoning Attacks
N Peri, N Gupta, WR Huang, L Fowl, C Zhu, S Feizi, T Goldstein, ...
Computer Vision–ECCV 2020 Workshops: Glasgow, UK, August 23–28, 2020 …, 2020
Cited by 108 · 2020
Robbing the fed: Directly obtaining private data in federated learning with modified models
L Fowl, J Geiping, W Czaja, M Goldblum, T Goldstein
arXiv preprint arXiv:2110.13057, 2021
Cited by 100 · 2021
Adversarial examples make strong poisons
L Fowl, M Goldblum, P Chiang, J Geiping, W Czaja, T Goldstein
Advances in Neural Information Processing Systems 34, 30339-30351, 2021
Cited by 92 · 2021
Sleeper agent: Scalable hidden trigger backdoors for neural networks trained from scratch
H Souri, L Fowl, R Chellappa, M Goldblum, T Goldstein
Advances in Neural Information Processing Systems 35, 19165-19178, 2022
Cited by 83 · 2022
Unraveling meta-learning: Understanding feature representations for few-shot tasks
M Goldblum, S Reich, L Fowl, R Ni, V Cherepanova, T Goldstein
International Conference on Machine Learning, 3607-3616, 2020
Cited by 77 · 2020
Adversarially robust few-shot learning: A meta-learning approach
M Goldblum, L Fowl, T Goldstein
Advances in Neural Information Processing Systems 33, 17886-17895, 2020
Cited by 75 · 2020
What doesn't kill you makes you robust(er): How to adversarially train against data poisoning
J Geiping, L Fowl, G Somepalli, M Goldblum, M Moeller, T Goldstein
arXiv preprint arXiv:2102.13624, 2021
Cited by 64 · 2021
Understanding generalization through visualizations
WR Huang, Z Emam, M Goldblum, L Fowl, JK Terry, F Huang, T Goldstein
PMLR, 2020
Cited by 62 · 2020
Fishing for user data in large-batch federated learning via gradient magnification
Y Wen, J Geiping, L Fowl, M Goldblum, T Goldstein
arXiv preprint arXiv:2202.00580, 2022
Cited by 59 · 2022
Can neural nets learn the same model twice? Investigating reproducibility and double descent from the decision boundary perspective
G Somepalli, L Fowl, A Bansal, P Yeh-Chiang, Y Dar, R Baraniuk, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 44 · 2022
DP-InstaHide: Provably defusing poisoning and backdoor attacks with differentially private data augmentations
E Borgnia, J Geiping, V Cherepanova, L Fowl, A Gupta, A Ghiasi, ...
arXiv preprint arXiv:2103.02079, 2021
Cited by 40 · 2021
Preventing unauthorized use of proprietary data: Poisoning for secure dataset release
L Fowl, P Chiang, M Goldblum, J Geiping, A Bansal, W Czaja, T Goldstein
arXiv preprint arXiv:2103.02683, 2021
Cited by 34 · 2021
Decepticons: Corrupted transformers breach privacy in federated learning for language models
L Fowl, J Geiping, S Reich, Y Wen, W Czaja, M Goldblum, T Goldstein
arXiv preprint arXiv:2201.12675, 2022
Cited by 31 · 2022
Robust few-shot learning with adversarially queried meta-learners
M Goldblum, L Fowl, T Goldstein
Cited by 13 · 2019
Strong baseline defenses against clean-label poisoning attacks
N Gupta, WR Huang, L Fowl, C Zhu, S Feizi, T Goldstein, J Dickerson
Cited by 11 · 2019
Thinking two moves ahead: Anticipating other users improves backdoor attacks in federated learning
Y Wen, J Geiping, L Fowl, H Souri, R Chellappa, M Goldblum, T Goldstein
arXiv preprint arXiv:2210.09305, 2022
Cited by 10 · 2022