Lukas Struppek
PhD Student, Artificial Intelligence and Machine Learning Lab @ Technical University of Darmstadt
Verified email at cs.tu-darmstadt.de - Homepage
Title
Cited by
Year
Fair Diffusion: Instructing Text-to-Image Generation Models on Fairness
F Friedrich, M Brack, L Struppek, D Hintersdorf, P Schramowski, ...
arXiv preprint arXiv:2302.10893, 2023
49 · 2023
Learning to Break Deep Perceptual Hashing: The Use Case NeuralHash
L Struppek, D Hintersdorf, D Neider, K Kersting
ACM Conference on Fairness, Accountability, and Transparency (FAccT), 58-69, 2022
36 · 2022
Plug & Play Attacks: Towards Robust and Flexible Model Inversion Attacks
L Struppek, D Hintersdorf, ADA Correia, A Adler, K Kersting
International Conference on Machine Learning (ICML) 162, 20522-20545, 2022
35* · 2022
SEGA: Instructing Text-to-Image Models using Semantic Guidance
M Brack, F Friedrich, D Hintersdorf, L Struppek, P Schramowski, ...
Conference on Neural Information Processing Systems (NeurIPS), 2023
34* · 2023
Exploiting Cultural Biases via Homoglyphs in Text-to-Image Synthesis
L Struppek, D Hintersdorf, F Friedrich, M Brack, P Schramowski, ...
Journal of Artificial Intelligence Research (JAIR) 78, 1017-1068, 2023
31* · 2023
Rickrolling the Artist: Injecting Backdoors into Text Encoders for Text-to-Image Synthesis
L Struppek, D Hintersdorf, K Kersting
International Conference on Computer Vision (ICCV), 2023
29* · 2023
To Trust or Not To Trust Prediction Scores for Membership Inference Attacks
D Hintersdorf, L Struppek, K Kersting
International Joint Conference on Artificial Intelligence (IJCAI), 3043-3049, 2021
9* · 2021
Does CLIP Know My Face?
D Hintersdorf, L Struppek, M Brack, F Friedrich, P Schramowski, ...
Journal of Artificial Intelligence Research (JAIR), 2024
5* · 2024
Sparsely-gated Mixture-of-Expert Layers for CNN Interpretability
S Pavlitskaya, C Hubschneider, L Struppek, JM Zöllner
International Joint Conference on Neural Networks (IJCNN), 2023
4* · 2023
Leveraging Diffusion-Based Image Variations for Robust Training on Poisoned Data
L Struppek, MB Hentschel, C Poth, D Hintersdorf, K Kersting
Conference on Neural Information Processing Systems (NeurIPS) - Workshop on …, 2023
3 · 2023
Defending Our Privacy With Backdoors
D Hintersdorf, L Struppek, D Neider, K Kersting
Conference on Neural Information Processing Systems (NeurIPS) - Workshop on …, 2023
2 · 2023
Exploring the Adversarial Capabilities of Large Language Models
L Struppek, MH Le, D Hintersdorf, K Kersting
International Conference on Learning Representations (ICLR) - Workshop on …, 2024
1 · 2024
Combining AI and AM — Improving Approximate Matching through Transformer Networks
F Uhlig, L Struppek, D Hintersdorf, T Göbel, H Baier, K Kersting
Forensic Science International: Digital Investigation 45, 301570, 2023
1 · 2023
Class Attribute Inference Attacks: Inferring Sensitive Class Information by Diffusion-Based Attribute Manipulations
L Struppek, D Hintersdorf, F Friedrich, M Brack, P Schramowski, ...
arXiv preprint arXiv:2303.09289, 2023
1* · 2023
CollaFuse: Navigating Limited Resources and Privacy in Collaborative Generative AI
D Zipperling, S Allmendinger, L Struppek, N Kühl
European Conference on Information Systems (ECIS), 2024
2024
Be Careful What You Smooth For: Label Smoothing Can Be a Privacy Shield but Also a Catalyst for Model Inversion Attacks
L Struppek, D Hintersdorf, K Kersting
International Conference on Learning Representations (ICLR), 2024
2024
Balancing Transparency and Risk: The Security and Privacy Risks of Open-Source Machine Learning Models
D Hintersdorf, L Struppek, K Kersting
AISoLA: Bridging the Gap Between AI and Reality, 2023
2023
Investigating the Risks of Client-Side Scanning for the Use Case NeuralHash
D Hintersdorf, L Struppek, D Neider, K Kersting
🏆 IEEE Symposium on Security and Privacy - Workshop on Technology and …, 2022
2022