- Survey of Vulnerabilities in Large Language Models Revealed by Adversarial Attacks. E Shayegani, MAA Mamun, Y Fu, P Zaree, Y Dong, N Abu-Ghazaleh. The 62nd Annual Meeting of the Association for Computational Linguistics …, 2023. (Cited by 86)
- Jailbreak in Pieces: Compositional Adversarial Attacks on Multi-Modal Language Models. E Shayegani, Y Dong, N Abu-Ghazaleh. ICLR 2024 (Spotlight); 🏆 Best Paper Award, SoCal NLP 2023. (Cited by 64)
- Plug and Pray: Exploiting Off-the-Shelf Components of Multi-Modal Models. E Shayegani, Y Dong, N Abu-Ghazaleh. arXiv preprint arXiv:2307.14539, 2023. (Cited by 12)
- Cross-Modal Safety Alignment: Is Textual Unlearning All You Need? T Chakraborty*, E Shayegani*, Z Cai, N Abu-Ghazaleh, M Salman Asif, … EMNLP 2024 Findings, 2024. (Cited by 5)
- That Doesn't Go There: Attacks on Shared State in Multi-User Augmented Reality Applications. C Slocum, Y Zhang, E Shayegani, P Zaree, N Abu-Ghazaleh, J Chen. USENIX Security 2024. (Cited by 2)
- DeepMem: ML Models as Storage Channels and Their (Mis-)Applications. MAA Mamun, QM Alam, E Shaigani, P Zaree, I Alouani, N Abu-Ghazaleh. arXiv preprint arXiv:2307.08811, 2023. (Cited by 2)