CLC number: TP391.1
On-line Access: 2024-05-06
Received: 2023-07-12
Revision Accepted: 2024-05-06
Crosschecked: 2023-10-08
Wei LIN, Lichuan LIAO. Towards sustainable adversarial training with successive perturbation generation[J]. Frontiers of Information Technology & Electronic Engineering, 2024, 25(4): 527-539.
@article{lin2024spgat,
title="Towards sustainable adversarial training with successive perturbation generation",
author="Wei LIN and Lichuan LIAO",
journal="Frontiers of Information Technology \& Electronic Engineering",
volume="25",
number="4",
pages="527-539",
year="2024",
publisher="Zhejiang University Press \& Springer",
doi="10.1631/FITEE.2300474"
}
Abstract: Adversarial training with online-generated adversarial examples has achieved promising performance in defending against adversarial attacks and improving the robustness of convolutional neural network models. However, most existing adversarial training methods are dedicated to finding strong adversarial examples that force the model to learn the adversarial data distribution, which inevitably imposes a large computational overhead and degrades generalization performance on clean data. In this paper, we show that progressively enhancing the adversarial strength of adversarial examples across training epochs can effectively improve model robustness, and that appropriate model shifting can preserve the generalization performance of models at negligible computational cost. To this end, we propose a successive perturbation generation scheme for adversarial training (SPGAT), which progressively strengthens adversarial examples by adding perturbations to the adversarial examples transferred from the previous epoch, and shifts models across epochs to improve the efficiency of adversarial training. The proposed SPGAT is both efficient and effective; e.g., the computation time of our method is 900 min, compared with 4100 min for standard adversarial training, and the performance boost is more than 7% and 3% in terms of adversarial accuracy and clean accuracy, respectively. We extensively evaluate SPGAT on various datasets, including small-scale MNIST, medium-scale CIFAR-10, and large-scale CIFAR-100. The experimental results show that our method is more efficient while performing favorably against state-of-the-art methods.
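The perturbation-transfer idea in the abstract can be illustrated with a minimal sketch: instead of regenerating each adversarial example from scratch, every epoch initializes the perturbation from the one transferred from the previous epoch, strengthens it with one sign-gradient step, and projects it onto a (possibly growing) budget. All function names, step sizes, and the FGSM-style update below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def successive_perturbation(delta_prev, grad, step, eps):
    """Strengthen a transferred perturbation with one sign-gradient step,
    then project it back onto the L-infinity ball of radius eps."""
    delta = delta_prev + step * np.sign(grad)
    return np.clip(delta, -eps, eps)

# Toy loop: the perturbation persists across "epochs", so each epoch
# starts from a stronger adversarial example instead of a fresh one.
rng = np.random.default_rng(0)
x = rng.normal(size=(4,))           # stand-in input
delta = np.zeros_like(x)            # epoch-0 perturbation
for epoch in range(5):
    grad = rng.normal(size=x.shape)     # stand-in loss gradient w.r.t. x
    eps = 0.01 * (epoch + 1)            # budget grows across epochs
    delta = successive_perturbation(delta, grad, step=0.004, eps=eps)
x_adv = x + delta                   # progressively strengthened example
```

The point of the sketch is the amortization: each epoch pays for only one gradient step, yet the accumulated perturbation approaches the strength of a multi-step attack over the course of training.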