| Publication | Cited by | Year |
|---|---|---|
| Prompt-specific poisoning attacks on text-to-image generative models. S Shan, W Ding, J Passananti, H Zheng, BY Zhao. arXiv preprint arXiv:2310.13828, 2023 | 15 | 2023 |
| How to combine membership-inference attacks on multiple updated machine learning models. M Jagielski, S Wu, A Oprea, J Ullman, R Geambasu. Proceedings on Privacy Enhancing Technologies, 2023 | 13* | 2023 |
| TMI! Finetuned models leak private information from their pretraining data. J Abascal, S Wu, A Oprea, J Ullman. arXiv preprint arXiv:2306.01181, 2023 | 6 | 2023 |
| A Response to Glaze Purification via IMPRESS. S Shan, S Wu, H Zheng, BY Zhao. arXiv preprint arXiv:2312.07731, 2023 | | 2023 |
| TMI! Finetuned Models Spill Secrets from Pretraining. J Abascal, S Wu, A Oprea, J Ullman. The Second Workshop on New Frontiers in Adversarial Machine Learning, 2023 | | 2023 |