J. Clements, Y. Lao, "Hardware trojan attacks on neural networks," arXiv preprint arXiv:1806.05768, 2018. (Cited by 90)
J. Clements, Y. Yang, A. A. Sharma, H. Hu, Y. Lao, "Rallying adversarial techniques against deep learning for network security," 2021 IEEE Symposium Series on Computational Intelligence (SSCI), pp. 1-8, 2021. (Cited by 61)
J. Clements, Y. Lao, "Hardware trojan design on neural networks," 2019 IEEE International Symposium on Circuits and Systems (ISCAS), pp. 1-5, 2019. (Cited by 59)
J. Clements, Y. Lao, "Backdoor attacks on neural network operations," 2018 IEEE Global Conference on Signal and Information Processing (GlobalSIP …), 2018. (Cited by 33)
B. Ranabhat, J. Clements, J. Gatlin, K. T. Hsiao, M. Yampolskiy, "Optimal sabotage attack on composite material parts," International Journal of Critical Infrastructure Protection 26, 100301, 2019. (Cited by 22)
J. Clements, Y. Lao, "DeepHardMark: Towards watermarking neural network hardware," Proceedings of the AAAI Conference on Artificial Intelligence 36 (4), pp. 4450-4458, 2022. (Cited by 9)
J. Clements, Y. Lao, "In Pursuit of Preserving the Fidelity of Adversarial Images," ICASSP 2022 - 2022 IEEE International Conference on Acoustics, Speech and …, 2022. (Cited by 1)
J. Clements, Y. Lao, "Resource Efficient Deep Learning Hardware Watermarks with Signature Alignment," Proceedings of the AAAI Conference on Artificial Intelligence 38 (10), 11651 …, 2024.
J. F. Clements, Y. Lao, "Reliable Hardware Watermarks for Deep Learning Systems," IEEE Transactions on Very Large Scale Integration (VLSI) Systems, 2024.
J. Clements, "Adversarial Deep Learning and Security with a Hardware Perspective," Clemson University, 2023.