Shengwei An
Verified email at purdue.edu - Homepage
Title · Cited by · Year
Backdoor scanning for deep neural networks through k-arm optimization
G Shen, Y Liu, G Tao, S An, Q Xu, S Cheng, S Ma, X Zhang
International Conference on Machine Learning, 9525-9536, 2021
Cited by 95 · 2021
Better trigger inversion optimization in backdoor scanning
G Tao, G Shen, Y Liu, S An, Q Xu, S Ma, P Li, X Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Cited by 53 · 2022
Piccolo: Exposing complex backdoors in NLP transformer models
Y Liu, G Shen, G Tao, S An, S Ma, X Zhang
2022 IEEE Symposium on Security and Privacy (SP), 2025-2042, 2022
Cited by 51 · 2022
An invisible black-box backdoor attack through frequency domain
T Wang, Y Yao, F Xu, S An, H Tong, T Wang
European Conference on Computer Vision, 396-413, 2022
Cited by 43 · 2022
Model orthogonalization: Class distance hardening in neural networks for better security
G Tao, Y Liu, G Shen, Q Xu, S An, Z Zhang, X Zhang
2022 IEEE Symposium on Security and Privacy (SP), 1372-1389, 2022
Cited by 41 · 2022
Backdoor attack through frequency domain
T Wang, Y Yao, F Xu, S An, H Tong, T Wang
arXiv preprint arXiv:2111.10991, 2021
Cited by 33 · 2021
Flip: A provable defense framework for backdoor mitigation in federated learning
K Zhang, G Tao, Q Xu, S Cheng, S An, Y Liu, S Feng, G Shen, PY Chen, ...
arXiv preprint arXiv:2210.12873, 2022
Cited by 29 · 2022
Mirror: Model inversion for deep learning network with high fidelity
S An, G Tao, Q Xu, Y Liu, G Shen, Y Yao, J Xu, X Zhang
Proceedings of the 29th Network and Distributed System Security Symposium, 2022
Cited by 29 · 2022
Constrained optimization with dynamic bound-scaling for effective NLP backdoor defense
G Shen, Y Liu, G Tao, Q Xu, Z Zhang, S An, S Ma, X Zhang
International Conference on Machine Learning, 19879-19892, 2022
Cited by 25 · 2022
Augmented example-based synthesis using relational perturbation properties
S An, R Singh, S Misailovic, R Samanta
Proceedings of the ACM on Programming Languages 4 (POPL), 1-24, 2019
Cited by 15 · 2019
Backdoor vulnerabilities in normally trained deep learning models
G Tao, Z Wang, S Cheng, S Ma, S An, Y Liu, G Shen, Z Zhang, Y Mao, ...
arXiv preprint arXiv:2211.15929, 2022
Cited by 11 · 2022
An event-based formal framework for dynamic software update
S An, X Ma, C Cao, P Yu, C Xu
2015 IEEE International Conference on Software Quality, Reliability and …, 2015
Cited by 9 · 2015
Beagle: Forensics of deep learning backdoor attack for better defense
S Cheng, G Tao, Y Liu, S An, X Xu, S Feng, G Shen, K Zhang, Q Xu, S Ma, ...
arXiv preprint arXiv:2301.06241, 2023
Cited by 7 · 2023
Medic: Remove model backdoors via importance driven cloning
Q Xu, G Tao, J Honorio, Y Liu, S An, G Shen, S Cheng, X Zhang
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
Cited by 5 · 2023
Hard-label black-box universal adversarial patch attack
G Tao, S An, S Cheng, G Shen, X Zhang
32nd USENIX Security Symposium (USENIX Security 23), 697-714, 2023
Cited by 3 · 2023
Deck: Model hardening for defending pervasive backdoors
G Tao, Y Liu, S Cheng, S An, Z Zhang, Q Xu, G Shen, X Zhang
arXiv preprint arXiv:2206.09272, 2022
Cited by 3 · 2022
Elijah: Eliminating backdoors injected in diffusion models via distribution shift
S An, SY Chou, K Zhang, Q Xu, G Tao, G Shen, S Cheng, S Ma, PY Chen, ...
Proceedings of the AAAI Conference on Artificial Intelligence 38 (10), 10847 …, 2024
Cited by 2 · 2024
Django: Detecting trojans in object detection models via Gaussian focus calibration
G Shen, S Cheng, G Tao, K Zhang, Y Liu, S An, S Ma, X Zhang
Advances in Neural Information Processing Systems 36, 2024
Cited by 2 · 2024
Remove Model Backdoors via Importance Driven Cloning
Q Xu, G Tao, J Honorio, Y Liu, S An, G Shen, S Cheng, X Zhang
IEEE Conference on Computer Vision and Pattern Recognition, 2023
Cited by 2 · 2023
Confidence matters: Inspecting backdoors in deep neural networks via distribution transfer
T Wang, Y Yao, F Xu, M Xu, S An, T Wang
arXiv preprint arXiv:2208.06592, 2022
Cited by 2 · 2022
Articles 1–20