A Method Robust Against Adversarial Attacks Using Scalable Gaussian Processes and Voting
Authors: Mehran Safayani 1, Pouyan Shalbafan 2, Seyed Hashem Ahmadi 3, Mahdieh Fallah Aliabadi 4, Abdolreza Mirzaei 5
1-5 - Isfahan University of Technology, Department of Electrical and Computer Engineering
Keywords: neural networks, Gaussian processes, scalable Gaussian processes, adversarial examples
Abstract:
In recent years, the vulnerability of machine-learning models has emerged as a serious issue: learning models are not highly robust when confronted with such vulnerabilities. One of the best-known attacks is the injection of adversarial examples into a model, and neural networks, deep neural networks in particular, are the most susceptible to it. Adversarial examples are produced by adding a small amount of targeted noise to the original examples, so that a human observer perceives no noticeable change in the data while machine-learning models misclassify it. Gaussian processes are among the successful approaches to modeling uncertainty in data, yet they have received little attention in the context of adversarial examples; one reason may be their high computational cost, which limits their use in real-world problems. This paper employs a scalable Gaussian process model based on random features. Besides retaining the ability of Gaussian processes to model uncertainty in the data well, this model is also computationally attractive. A voting-based procedure is then presented to counter adversarial examples. Furthermore, a method called automatic relevance determination is proposed to assign greater weight to the important regions of the images and to incorporate these weights into the Gaussian process kernel function. The results show that the proposed model performs very well against the fast gradient sign attack compared with competing methods.
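The fast gradient sign attack discussed in the abstract perturbs each input component by a small step in the sign of the loss gradient. A minimal sketch follows, using a toy logistic-regression classifier (hypothetical weights and input, chosen only so the gradient is analytic) rather than the deep networks studied in the paper:

```python
import numpy as np

def fgsm_perturb(x, grad_wrt_x, epsilon=0.1):
    """Fast gradient sign method: shift every component of x by
    epsilon in the direction that increases the loss."""
    return x + epsilon * np.sign(grad_wrt_x)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical trained weights and a clean input with true label y = 1.
w = np.array([2.0, -1.0, 0.5])
x = np.array([0.5, 0.2, 0.8])
y = 1.0

p = sigmoid(w @ x)          # model confidence for the true class
grad = (p - y) * w          # analytic d(cross-entropy)/dx
x_adv = fgsm_perturb(x, grad, epsilon=0.1)

# The perturbation is bounded by epsilon in max-norm, yet the
# confidence for the true class drops.
p_adv = sigmoid(w @ x_adv)
```

Because the step size bounds the max-norm of the perturbation, the change is visually negligible for images while still degrading the classifier, which is the behavior the abstract describes.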
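The scalable Gaussian process in the paper rests on random-feature kernel approximation, and automatic relevance determination (ARD) amounts to a per-dimension lengthscale in the kernel. A sketch of how these two ideas combine, using random Fourier features for an ARD-RBF kernel (the feature count, lengthscales, and data here are illustrative assumptions, not the paper's settings):

```python
import numpy as np

rng = np.random.default_rng(0)

def ard_random_features(X, lengthscales, n_features=2000, rng=rng):
    """Random Fourier features approximating an RBF kernel with
    automatic relevance determination: dimension d is scaled by
    1/lengthscales[d], so smaller lengthscales give that dimension
    more influence on the kernel."""
    d = X.shape[1]
    # Spectral frequencies of the ARD-RBF kernel: one scale per dimension.
    W = rng.standard_normal((d, n_features)) / lengthscales[:, None]
    b = rng.uniform(0.0, 2.0 * np.pi, n_features)
    return np.sqrt(2.0 / n_features) * np.cos(X @ W + b)

# With features Z, the kernel matrix is approximated by Z @ Z.T,
# reducing GP training cost from O(n^3) toward O(n m^2) for m features.
X = rng.standard_normal((5, 3))
ell = np.array([1.0, 1.0, 1.0])
Z = ard_random_features(X, ell)
K_approx = Z @ Z.T

# Exact ARD-RBF kernel for comparison.
def rbf(x, y, ell):
    return np.exp(-0.5 * np.sum(((x - y) / ell) ** 2))

K_exact = np.array([[rbf(xi, xj, ell) for xj in X] for xi in X])
```

Learning the per-pixel lengthscales is what lets the kernel weight the important regions of an image more heavily, in the spirit of the ARD scheme the abstract proposes.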
[1] I. J. Goodfellow, J. Shlens, and C. Szegedy, "Explaining and harnessing adversarial examples," in Proc. 3rd Int. Conf. on Learning Representations, ICLR’15, 11 pp., San Diego, CA, USA, 7-9 May 2015.
[2] C. Szegedy, et al., "Intriguing properties of neural networks," in Proc. 2nd Int. Conf. on Learning Representations, ICLR ‘14, 15 pp., Banff, Canada, 14-16 Apr. 2014.
[3] J. Su, D. V. Vargas, and K. Sakurai, "One pixel attack for fooling deep neural networks," IEEE Trans. on Evolutionary Computation, vol. 23, no. 5, pp. 828-841, Oct. 2019.
[4] N. Martins, J. M. Cruz, T. Cruz, and P. Henriques Abreu, "Adversarial machine learning applied to intrusion and malware scenarios: a systematic review," IEEE Access, vol. 8, pp. 35403-35419, 2020.
[5] X. Peng, H. Xian, Q. Lu, and X. Lu, "Semantics aware adversarial malware examples generation for black-box attacks," Applied Soft Computing, vol. 109, Article ID: 107506, Sept. 2021.
[6] Y. Y. Chen, C. T. Chen, C. Y. Sang, Y. C. Yang, and S. H. Huang, "Adversarial attacks against reinforcement learning-based portfolio management strategy," IEEE Access, vol. 9, pp. 50667-50685, 2021.
[7] R. Ramadan, "Detecting adversarial attacks on audio-visual speech recognition using deep learning method," International J. of Speech Technology, Article ID: 02.06.2021, 21 pp., Jun. 2021.
[8] L. Yang, Q. Song, and Y. Wu, "Attacks on state-of-the-art face recognition using attentional adversarial attack generative network," Multimedia Tools and Applications, vol. 80, pp. 1-21, 2021.
[9] Y. Zuo, H. Yao, and C. Xu, "Category-level adversarial self-ensembling for domain adaptation," in Proc. IEEE Int. Conf. on Multimedia and Expo, ICME’20, 6 pp., London, UK, 6-10 Jul. 2020.
[10] Z. Wei, et al., "Heuristic black-box adversarial attacks on video recognition models," in Proc. of the 34th AAAI Conf. on Artificial Intelligence, pp. 12338-12345, New York, NY, USA, 7-12 Feb. 2020.
[11] D. Wang, et al., "Daedalus: breaking nonmaximum suppression in object detection via adversarial examples," IEEE Trans. on Cybernetics, Early Access, pp. 1-14, 2021.
[12] I. Goodfellow, et al., "Generative adversarial nets," in Advances in Neural Information Processing Systems 27, Z. Ghahramani, M. Welling, C. Cortes, N. D. Lawrence, and K. Q. Weinberger, Eds., pp. 2672-2680, 2014.
[13] C. Blundell, J. Cornebise, K. Kavukcuoglu, and D. Wierstra, "Weight uncertainty in neural network," in Proc. of the 32nd Int. Conf. on Machine Learning, vol. 37, pp. 1613-1622, Lille, France, Jul. 2015.
[14] A. Rahimi and B. Recht, "Random features for large-scale kernel machines," Advances in Neural Information Processing Systems, NIPS’08, pp. 1177-1184, Vancouver and Whistler, Canada, 3-6 Dec. 2008.
[15] K. Cutajar, E. V. Bonilla, P. Michiardi, and M. Filippone, "Random feature expansions for deep Gaussian processes," in Proc. of the 34th Int. Conf. on Machine Learning, pp. 884-893, Sydney, Australia, Aug. 2017.
[16] A. Rozsa, E. M. Rudd, and T. E. Boult, "Adversarial diversity and hard positive generation," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’16, pp. 25-32, Las Vegas, NV, USA, 27-30 Jun. 2016.
[17] Y. Dong, F. Liao, T. Pang, H. Su, J. Zhu, X. Hu, and J. Li, "Boosting adversarial attacks with momentum," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’18, pp. 9185-9193, Salt Lake City, UT, USA, 18-23 Jun. 2018.
[18] F. Tramer, et al., "Ensemble adversarial training: attacks and defenses," in Proc. 6th Int. Conf. on Learning Representations, ICLR’18, 20 pp., Vancouver, Canada, 30 Apr.-3 May 2018.
[19] A. Kurakin, I. J. Goodfellow, and S. Bengio, "Adversarial examples in the physical world," in Proc. 5th Int. Conf. on Learning Representations, ICLR’17, 15 pp., Toulon, France, 24-26 Apr. 2017.
[20] S. M. Moosavi-Dezfooli, A. Fawzi, and P. Frossard, "Deepfool: a simple and accurate method to fool deep neural networks," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’16, pp. 2574-2582, Las Vegas, NV, USA, 27-30 Jun. 2016.
[21] X. Yuan, P. He, Q. Zhu, and X. Li, "Adversarial examples: attacks and defenses for deep learning," IEEE Trans. on Neural Networks and Learning Systems, vol. 30, no. 9, pp. 2805-2824, Sept. 2019.
[22] S. M. Moosavi-Dezfooli, A. Fawzi, O. Fawzi, and P. Frossard, "Universal adversarial perturbations," in Proc. of the IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’17, pp. 1765-1773, Honolulu, HI, USA, 21-26 Jul. 2017.
[23] C. Zhang, P. Benz, T. Imtiaz, and I. S. Kweon, "Understanding adversarial examples from the mutual influence of images and perturbations," in Proc. of the IEEE/CVF Conf. on Computer Vision and Pattern Recognition, CVPR’20, pp. 14509-14518, Seattle, WA, USA, 13-19 Jun. 2020.
[24] A. Demontis, M. Melis, M. Pintor, M. Jagielski, B. Biggio, A. Oprea, C. Nita-Rotaru, and F. Roli, "Why do adversarial attacks transfer? explaining transferability of evasion and poisoning attacks," in Proc. 28th USENIX Security Symp., USENIX Security, pp. 321-338, Santa Clara, CA, USA, 14-16 Aug. 2019.
[25] N. Carlini and D. A. Wagner, "Towards evaluating the robustness of neural networks," in Proc. IEEE Symp. on Security and Privacy, pp. 39-57, San Jose, CA, USA, 22-26 May 2017.
[26] J. Rony, L. G. Hafemann, L. S. Oliveira, I. B. Ayed, R. Sabourin, and E. Granger, "Decoupling direction and norm for efficient gradient-based L2 adversarial attacks and defenses," in Proc. IEEE Conf. on Computer Vision and Pattern Recognition, CVPR’19, pp. 4322-4330, Long Beach, CA, USA, Jun. 2019.
[27] Y. Liu and F. Cao, "Self-adaptive norm update for faster gradient based L2 adversarial attacks and defenses," in Proc. of the 10th Int. Conf. on Pattern Recognition Applications and Methods, ICPRAM’21, vol. 1, pp. 15-24, Vienna, Austria, 4-6 Feb. 2021.
[28] N. Papernot, P. McDaniel, X. Wu, S. Jha, and A. Swami, "Distillation as a defense to adversarial perturbations against deep neural networks," in Proc. IEEE Symp. on Security and Privacy, pp. 582-597, San Jose, CA, USA, 22-26 May 2016.
[29] J. Bradshaw, A. G. d. G. Matthews, and Z. Ghahramani, "Adversarial examples, uncertainty, and transfer testing robustness in Gaussian process hybrid deep networks," arXiv preprint arXiv:1707.02476, 2017.
[30] J. H. Metzen, T. Genewein, V. Fischer, and B. Bischoff, "On detecting adversarial perturbations," in Proc. 5th Int. Conf. on Learning Representations, ICLR’17, 12 pp., Toulon, France, Apr. 2017.
[31] D. Hendrycks and K. Gimpel, "Early methods for detecting adversarial images," in Proc. 5th Int. Conf. on Learning Representations, Workshop Track, ICLR’17, 9 pp., Toulon, France, Apr. 2017.
[32] J. Wei, Adversarial Examples for Visual Decompilers, Master's Thesis, EECS Department, University of California, Berkeley, May 2017.
[33] C. Xie, M. Tan, B. Gong, J. Wang, A. L. Yuille, and Q. V. Le, "Adversarial examples improve image recognition," in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, CVPR’20, pp. 816-825, Seattle, WA, USA, 13-19 Jun. 2020.
[34] H. Zheng, Z. Zhang, J. Gu, H. Lee, and A. Prakash, "Efficient adversarial training with transferable adversarial examples," in Proc. IEEE/CVF Conf. on Computer Vision and Pattern Recognition, CVPR’20, pp. 1178-1187, Seattle, WA, USA, 13-19 Jun. 2020.
[35] F. Guo, et al., "Detecting adversarial examples via prediction difference for deep neural networks," Information Sciences, vol. 501, pp. 182-192, Oct. 2019.
[36] H. Zhang, M. Cisse, Y. N. Dauphin, and D. Lopez-Paz, "Mixup: beyond empirical risk minimization," in Proc. 6th Int. Conf. on Learning Representations, ICLR’18, 13 pp., Vancouver, Canada, 30 Apr.-3 May 2018.
[37] P. Pauli, A. Koch, J. Berberich, P. Kohler, and F. Allgower, "Training robust neural networks using Lipschitz bounds," IEEE Control Systems Letters, vol. 6, pp. 121-126, 2021.
[38] A. Graves, "Practical variational inference for neural networks," in Proc. 25th Annual Conf. on Neural Information Processing Systems, NIPS’11, pp. 2348-2356, Sierra Nevada, Spain, 16-17 Dec. 2011.
[39] C. M. Bishop, Pattern Recognition and Machine Learning, Springer, 2006.
[40] J. Quinonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," J. of Machine Learning Research, vol. 6, pp. 1939-1959, Dec. 2005.
[41] V. Tresp, "A Bayesian committee machine," Neural Computation, vol. 12, no. 11, pp. 2719-2741, Nov. 2000.
[42] T. Chen and J. Ren, "Bagging for Gaussian process regression," Neurocomputing, vol. 72, no. 7-9, pp. 1605-1610, Mar. 2009.
[43] E. Rodner, A. Freytag, P. Bodesheim, and J. Denzler, "Large-scale Gaussian process classification with flexible adaptive histogram kernels," in Proc. European Conf. on Computer Vision, ECCV’12, pp. 85-98, Florence, Italy, 7-13 Oct. 2012.
[44] M. Słoński, "Bayesian neural networks and Gaussian processes in identification of concrete properties," Computer Assisted Methods in Engineering and Science, vol. 18, no. 4, pp. 291-302, 2017.
[45] C. K. Williams and C. E. Rasmussen, Gaussian Processes for Machine Learning, MIT Press, Cambridge, MA, 2006.
[46] A. Mustafa, et al., "Adversarial defense by restricting the hidden space of deep neural networks," in Proc. IEEE/CVF Int. Conf. on Computer Vision, ICCV’19, pp. 3384-3393, Seoul, South Korea, 27 Oct.-2 Nov. 2019.
[47] Y. Lecun, L. Bottou, Y. Bengio, and P. Haffner, "Gradient-based learning applied to document recognition," Proceedings of the IEEE, vol. 86, no. 11, pp. 2278-2324, Nov. 1998.
[48] H. Khosravi and E. Kabir, "Introducing a very large dataset of handwritten Farsi digits and a study on their varieties," Pattern Recognition Letters, vol. 28, no. 10, pp. 1133-1141, 2007.