Zhen Xiang
Verified email at illinois.edu
Title
Cited by
Year
Adversarial learning targeting deep neural network classification: A comprehensive review of defenses against attacks
DJ Miller, Z Xiang, G Kesidis
Proceedings of the IEEE 108 (3), 402-433, 2020
235 · 2020
A Backdoor Attack against 3D Point Cloud Classifiers
Z Xiang, DJ Miller, S Chen, X Li, G Kesidis
ICCV 2021, 2021
63 · 2021
Detection of backdoors in trained classifiers without access to the training set
Z Xiang, DJ Miller, G Kesidis
IEEE Transactions on Neural Networks and Learning Systems, 2020
54 · 2020
A benchmark study of backdoor data poisoning defenses for deep neural network classifiers and a novel defense
Z Xiang, DJ Miller, G Kesidis
2019 IEEE 29th International Workshop on Machine Learning for Signal …, 2019
54 · 2019
Post-Training Detection of Backdoor Attacks for Two-Class and Multi-Attack Scenarios
Z Xiang, DJ Miller, G Kesidis
ICLR 2022, 2022
40 · 2022
Revealing backdoors, post-training, in DNN classifiers via novel inference on optimized perturbations inducing group misclassification
Z Xiang, DJ Miller, G Kesidis
ICASSP 2020-2020 IEEE International Conference on Acoustics, Speech and …, 2020
36 · 2020
Detecting scene-plausible perceptible backdoors in trained DNNs without access to the training set
Z Xiang, DJ Miller, H Wang, G Kesidis
Neural computation 33 (5), 1329-1371, 2021
22* · 2021
Reverse engineering imperceptible backdoor attacks on deep neural networks for detection and training set cleansing
Z Xiang, DJ Miller, G Kesidis
Computers & Security 106, 102280, 2021
20 · 2021
MM-BD: Post-Training Detection of Backdoor Attacks with Arbitrary Backdoor Pattern Types Using a Maximum Margin Statistic
H Wang, Z Xiang, DJ Miller, G Kesidis
IEEE Symposium on Security & Privacy, 2024
19* · 2024
Detecting backdoor attacks against point cloud classifiers
Z Xiang, DJ Miller, S Chen, X Li, G Kesidis
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
17 · 2022
L-RED: Efficient post-training detection of imperceptible backdoor attacks without access to the training set
Z Xiang, DJ Miller, G Kesidis
ICASSP 2021-2021 IEEE International Conference on Acoustics, Speech and …, 2021
17 · 2021
Test-time detection of backdoor triggers for poisoned deep neural networks
X Li, Z Xiang, DJ Miller, G Kesidis
ICASSP 2022-2022 IEEE International Conference on Acoustics, Speech and …, 2022
13 · 2022
BadChain: Backdoor chain-of-thought prompting for large language models
Z Xiang, F Jiang, Z Xiong, B Ramasubramanian, R Poovendran, B Li
ICLR 2024, 2024
8 · 2024
UMD: Unsupervised Model Detection for X2X Backdoor Attacks
Z Xiang, Z Xiong, B Li
Proceedings of the 40th International Conference on Machine Learning 202 …, 2023
8 · 2023
Revealing perceptible backdoors in DNNs, without the training set, via the maximum achievable misclassification fraction statistic
Z Xiang, DJ Miller, H Wang, G Kesidis
2020 IEEE 30th International Workshop on Machine Learning for Signal …, 2020
8 · 2020
A scalable mixture model based defense against data poisoning attacks on classifiers
X Li, DJ Miller, Z Xiang, G Kesidis
Dynamic Data Driven Applications Systems: Third International Conference …, 2020
6 · 2020
Training set cleansing of backdoor poisoning by self-supervised representation learning
H Wang, S Karami, O Dia, H Ritter, E Emamjomeh-Zadeh, J Chen, ...
ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and …, 2023
5 · 2023
ArtPrompt: ASCII Art-based Jailbreak Attacks against Aligned LLMs
F Jiang, Z Xu, L Niu, Z Xiang, B Ramasubramanian, B Li, R Poovendran
arXiv preprint arXiv:2402.11753, 2024
4 · 2024
Improved Activation Clipping for Universal Backdoor Mitigation and Test-Time Detection
H Wang, Z Xiang, DJ Miller, G Kesidis
arXiv preprint arXiv:2308.04617, 2023
3 · 2023
A BIC-Based Mixture Model Defense Against Data Poisoning Attacks on Classifiers
X Li, DJ Miller, Z Xiang, G Kesidis
2023 IEEE 33rd International Workshop on Machine Learning for Signal …, 2023
2 · 2023