Seeing is Living? Rethinking the Security of Facial Liveness Verification in the Deepfake Era. C Li, L Wang, S Ji, X Zhang, Z Xi, S Guo, T Wang. USENIX Security 2022. Cited by 24.
An Embarrassingly Simple Backdoor Attack on Self-supervised Learning. C Li, R Pang, Z Xi, T Du, S Ji, Y Yao, T Wang. International Conference on Computer Vision (ICCV '23), 2023. Cited by 19*.
DeT: Defending against Adversarial Examples via Decreasing Transferability. C Li, H Weng, S Ji, J Dong, Q He. Cyberspace Safety and Security: 11th International Symposium, CSS 2019 …, 2019. Cited by 9.
Towards Certifying the Asymmetric Robustness for Neural Networks: Quantification and Applications. C Li, S Ji, H Weng, B Li, J Shi, R Beyah, S Guo, Z Wang, T Wang. IEEE Transactions on Dependable and Secure Computing 19 (6), 3987-4001, 2021. Cited by 7.
Hijack Vertical Federated Learning Models As One Party. P Qiu, X Zhang, S Ji, C Li, Y Pu, X Yang, T Wang. IEEE Transactions on Dependable and Secure Computing, 2024. Cited by 5*.
IMPRESS: Evaluating the Resilience of Imperceptible Perturbations Against Unauthorized Data Usage in Diffusion-Based Generative AI. B Cao, C Li, T Wang, J Jia, B Li, J Chen. Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS '23), 2023. Cited by 2.
On the Security Risks of Knowledge Graph Reasoning. Z Xi, T Du, C Li, R Pang, S Ji, X Luo, X Xiao, F Ma, T Wang. USENIX Security 2023. Cited by 2.
The Dark Side of AutoML: Towards Architectural Backdoor Search. R Pang, C Li, Z Xi, S Ji, T Wang. The Eleventh International Conference on Learning Representations (ICLR '23), 2022. Cited by 2.
When Large Language Models Confront Repository-Level Automatic Program Repair: How Well They Done? Y Chen, J Wu, X Ling, C Li, Z Rui, T Luo, Y Wu. arXiv preprint arXiv:2403.00448, 2024. Cited by 1.
Model Extraction Attacks Revisited. J Liang, R Pang, C Li, T Wang. arXiv preprint arXiv:2312.05386, 2023. Cited by 1.
Improving the Robustness of Transformer-based Large Language Models with Dynamic Attention. L Shen, Y Pu, S Ji, C Li, X Zhang, C Ge, T Wang. arXiv preprint arXiv:2311.17400, 2023. Cited by 1.
Defending Pre-trained Language Models as Few-shot Learners against Backdoor Attacks. Z Xi, T Du, C Li, R Pang, S Ji, J Chen, F Ma, T Wang. Thirty-seventh Conference on Neural Information Processing Systems (NeurIPS '23), 2023. Cited by 1.
Reasoning over Multi-view Knowledge Graphs. Z Xi, R Pang, C Li, T Du, S Ji, F Ma, T Wang. arXiv preprint arXiv:2209.13702, 2022. Cited by 1.
On the Difficulty of Defending Contrastive Learning against Backdoor Attacks. C Li, R Pang, B Cao, Z Xi, J Chen, S Ji, T Wang. USENIX Security 2024.
A Change of Heart: Backdoor Attacks on Security-Centric Diffusion Models. C Li, R Pang, B Cao, J Chen, T Wang. 2023.
Neural Architectural Backdoors. R Pang, C Li, Z Xi, S Ji, T Wang. arXiv preprint arXiv:2210.12179, 2022.
Towards Robust Reasoning over Knowledge Graphs. Z Xi, R Pang, C Li, S Ji, X Luo, X Xiao, T Wang. arXiv preprint arXiv:2110.14693, 2021.