AutoDAN: Generating Stealthy Jailbreak Prompts on Aligned Large Language Models. X Liu, N Xu, M Chen, C Xiao. ICLR 2024. Cited by 143.

Protecting Facial Privacy: Generating Adversarial Identity Masks via Style-Robust Makeup Transfer. S Hu, X Liu, Y Zhang, M Li, LY Zhang, H Jin, L Wu. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2022. Cited by 86.

AdvHash: Set-to-Set Targeted Attack on Deep Hashing with One Single Adversarial Patch. S Hu, Y Zhang, X Liu, LY Zhang, M Li, H Jin. Proceedings of the 29th ACM International Conference on Multimedia, 2335-2343, 2021. Cited by 28.

Detecting Backdoors During the Inference Stage Based on Corruption Robustness Consistency. X Liu, M Li, H Wang, S Hu, D Ye, H Jin, L Wu, C Xiao. Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR), 2023. Cited by 25.

Towards Efficient Data-Centric Robust Machine Learning with Noise-Based Augmentation. X Liu, H Wang, Y Zhang, F Wu, S Hu. arXiv preprint arXiv:2203.03810, 2022. Cited by 13.

JailBreakV-28K: A Benchmark for Assessing the Robustness of Multimodal Large Language Models Against Jailbreak Attacks. W Luo, S Ma, X Liu, X Guo, C Xiao. arXiv preprint arXiv:2404.03027, 2024. Cited by 12.

DeceptPrompt: Exploiting LLM-Driven Code Generation via Adversarial Natural Language Instructions. F Wu, X Liu, C Xiao. arXiv preprint arXiv:2312.04730, 2023. Cited by 12.

Don't Listen to Me: Understanding and Exploring Jailbreak Prompts of Large Language Models. Z Yu, X Liu, S Liang, Z Cameron, C Xiao, N Zhang. 33rd USENIX Security Symposium (USENIX Security 24), 2024. Cited by 11.

Why Does Little Robustness Help? A Further Step Towards Understanding Adversarial Transferability. Y Zhang, S Hu, LY Zhang, J Shi, M Li, X Liu, H Jin. Proceedings of the 45th IEEE Symposium on Security and Privacy (S&P'24), 2024. Cited by 9*.

PointCRT: Detecting Backdoor in 3D Point Cloud via Corruption Robustness. S Hu, W Liu, M Li, Y Zhang, X Liu, X Wang, LY Zhang, J Hou. Proceedings of the 31st ACM International Conference on Multimedia, 666-675, 2023. Cited by 8.

AdaShield: Safeguarding Multimodal Large Language Models from Structure-Based Attack via Adaptive Shield Prompting. Y Wang, X Liu, Y Li, M Chen, C Xiao. arXiv preprint arXiv:2403.09513, 2024. Cited by 6.

Automatic and Universal Prompt Injection Attacks Against Large Language Models. X Liu, Z Yu, Y Zhang, N Zhang, C Xiao. arXiv preprint arXiv:2403.04957, 2024. Cited by 5.

MuirBench: A Comprehensive Benchmark for Robust Multi-Image Understanding. F Wang, X Fu, JY Huang, Z Li, Q Liu, X Liu, MD Ma, N Xu, W Zhou, ... arXiv preprint arXiv:2406.09411, 2024. Cited by 1.