Master's and Doctoral Theses: detailed record for etd-1118120-170138
Title page for etd-1118120-170138
Title
以終身學習神經網路防禦對抗例攻擊
Defense Adversarial Attack by Continual Learning in Neural Network
Department
Year, semester
Language
Degree
Number of pages
73
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2020-12-04
Date of Submission
2020-12-18
Keywords
Deep Learning, Pruning, Adversarial Training, Continual Learning, Image Recognition
Statistics
The thesis/dissertation has been viewed 494 times and downloaded 1 time.
Chinese Abstract
With the mature development of image recognition technology, computer vision has achieved impressive research results through many well-known deep learning networks. In recent years, however, adversarial example images that interfere with model recognition have emerged as a problem. Preventing people from exploiting this technique to break models or harm users will be a major issue once computer vision technology is commercialized or widely adopted. Taking the defender's perspective, this study proposes a defense model that combines the adversarial training proposed by Madry et al. with the concept of continual learning to build an effective and flexible model.
Through this study we propose the CMAT model as our defense against currently well-known attacks. The model incorporates continual learning, so future users can freely add tasks to make it stronger, and it uses pruning to improve efficiency. We first compare the performance of PackNet on different network architectures and then choose the best-performing one, VGG, as the base network for the proposed CMAT. We examine whether CMAT is suitable as a defense network through visualization and experimental results. This study is also the first in this field to combine continual learning with basic defense techniques, and we hope its results can serve as an experimental reference for future related research.
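The pruning mentioned above is, in PackNet-style continual learning, an iterative claim-and-freeze procedure: after a task is trained, the largest-magnitude weights not yet claimed by earlier tasks are assigned to that task and frozen, while the remaining free weights are zeroed and released for future tasks. The following is a minimal sketch of that idea in PyTorch, not the thesis's actual CMAT implementation; the function names and the 50% pruning ratio are illustrative assumptions.

```python
# Minimal PackNet-style pruning sketch (illustrative, not the thesis's CMAT code).
import torch
import torch.nn as nn


def prune_for_task(model: nn.Module, task_masks: dict, task_id: int, prune_ratio: float = 0.5):
    """Claim the largest-magnitude free weights of each layer for `task_id`."""
    for name, param in model.named_parameters():
        if param.dim() < 2:                      # skip biases / norm parameters
            continue
        owner = task_masks.setdefault(name, torch.zeros_like(param, dtype=torch.long))
        free = owner == 0                        # weights no earlier task has claimed
        magnitudes = param.detach().abs()[free]
        if magnitudes.numel() == 0:
            continue
        k = max(int(magnitudes.numel() * (1.0 - prune_ratio)), 1)
        threshold = torch.topk(magnitudes, k).values.min()
        keep = free & (param.detach().abs() >= threshold)
        owner[keep] = task_id                    # these weights now belong to task_id
        param.data[free & ~keep] = 0.0           # prune the rest; free for later tasks


def mask_gradients(model: nn.Module, task_masks: dict, task_id: int):
    """Call after loss.backward(): weights owned by earlier tasks stay frozen."""
    for name, param in model.named_parameters():
        if param.grad is None or name not in task_masks:
            continue
        owner = task_masks[name]
        param.grad[(owner != 0) & (owner != task_id)] = 0.0
```

At inference time for a given task, a binary mask built from `task_masks` (keeping only the weights claimed by that task and earlier ones) would be applied, which is how PackNet keeps old tasks intact while new tasks are added.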
Abstract
With the mature development of image recognition technology, computer vision has achieved impressive research results through many well-known deep learning networks. In recent years, however, adversarial examples that interfere with model recognition have emerged as a problem, and preventing attackers from using this technique to damage models or harm users will be a major issue once computer vision technology is commercialized or widely adopted. From the perspective of defense, this research proposes a defense model that combines the adversarial training proposed by Madry et al. with the concept of continual learning to establish an effective and flexible model.
In this research we propose the CMAT model as our defense against currently well-known attacks. We explore whether CMAT is applicable to defense networks through visualization and experimental data. This research is also the first work in this field to combine continual learning with basic defense techniques, and we hope the results of this thesis can serve as an experimental reference for future related research.
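The adversarial training the abstract attributes to Madry et al. (2017) alternates an inner maximization, which searches for a worst-case perturbation with projected gradient descent (PGD) inside an L-infinity ball, with the usual outer minimization over the model weights. The sketch below is a minimal PyTorch illustration of that loop, not the code used in this thesis; the epsilon, step size, and number of steps are illustrative CIFAR-style values.

```python
# Minimal PGD adversarial-training sketch (illustrative values, not the thesis's code).
import torch
import torch.nn.functional as F


def pgd_attack(model, x, y, eps=8 / 255, alpha=2 / 255, steps=7):
    """Gradient ascent on the loss, projected back into the L-infinity ball of radius eps."""
    x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()  # random start
    for _ in range(steps):
        x_adv.requires_grad_(True)
        loss = F.cross_entropy(model(x_adv), y)
        grad = torch.autograd.grad(loss, x_adv)[0]
        x_adv = x_adv.detach() + alpha * grad.sign()                       # ascent step
        x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)  # project
    return x_adv.detach()


def adversarial_training_step(model, optimizer, x, y):
    """One outer-minimization step on adversarial examples crafted for the current batch."""
    model.eval()                      # keep batch-norm statistics fixed while attacking
    x_adv = pgd_attack(model, x, y)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(x_adv), y)
    loss.backward()
    optimizer.step()
    return loss.item()
```

In a continual-learning setting such as the CMAT described above, `loss.backward()` would additionally be followed by a per-task gradient mask, as in the PackNet-style sketch earlier, so that adversarially training a new task does not overwrite weights claimed by earlier tasks.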
Table of Contents
Thesis Certification i
Chinese Abstract ii
Abstract iii
Chapter 1 Introduction 1
1.1 Motivation 1
1.2 Problem Description 3
Chapter 2 Background and Related Work 5
2.1 Adversarial Example 5
2.1.1 Black-Box Attack 6
2.2 Attack Method 7
2.2.1 Fast Gradient Sign Method 8
2.2.2 Projected Gradient Descent Attack 9
2.3 Transferability 11
2.4 Defense Method 12
2.5 Adversarial Training 14
2.6 Continual Learning 15
2.7 Progressive Network 16
2.8 PackNet 18
2.9 CPG Network 20
Chapter 3 Proposed Method 22
Chapter 4 Experiment Results 31
4.1 Experimental Design 31
4.1.1 CIFAR-100 on PackNet with Adversarial Training 33
4.1.2 CIFAR-100 on CMAT with Adversarial Training 33
4.1.3 Repeat Experiments on ImageNet Dataset 33
4.2 Evaluation Metric 34
4.3 Experiment Results 35
4.4 Advanced Experiments 48
Chapter 5 Conclusion 55
5.1 Discussion 55
5.2 Future Research and Limitations 57
References 60
References
Abadi, M., Chu, A., Goodfellow, I., McMahan, H. B., Mironov, I., Talwar, K., & Zhang, L. (2016). Deep learning with differential privacy. Paper presented at the Proceedings of the 2016 ACM SIGSAC Conference on Computer and Communications Security.
Akhtar, N., & Mian, A. (2018). Threat of adversarial attacks on deep learning in computer vision: A survey. IEEE Access, 6, 14410-14430.
Aljundi, R., Chakravarty, P., & Tuytelaars, T. (2017). Expert gate: Lifelong learning with a network of experts. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Athalye, A., Carlini, N., & Wagner, D. (2018). Obfuscated gradients give a false sense of security: Circumventing defenses to adversarial examples. arXiv preprint arXiv:1802.00420.
Baluja, S., & Fischer, I. (2017). Adversarial transformation networks: Learning to generate adversarial examples. arXiv preprint arXiv:1703.09387.
Bottou, L. (2010). Large-scale machine learning with stochastic gradient descent. In Proceedings of COMPSTAT'2010 (pp. 177-186): Springer.
Carlini, N., & Wagner, D. (2017). Towards evaluating the robustness of neural networks. Paper presented at the 2017 IEEE Symposium on Security and Privacy (SP).
Chen, Z., & Liu, B. (2018). Lifelong machine learning. Synthesis Lectures on Artificial Intelligence and Machine Learning, 12(3), 1-207.
Das, S., & Suganthan, P. N. (2010). Differential evolution: A survey of the state-of-the-art. IEEE Transactions on Evolutionary Computation, 15(1), 4-31.
Deng, J., Dong, W., Socher, R., Li, L.-J., Li, K., & Fei-Fei, L. (2009). ImageNet: A large-scale hierarchical image database. Paper presented at the 2009 IEEE Conference on Computer Vision and Pattern Recognition.
Deng, L. (2012). The MNIST database of handwritten digit images for machine learning research [best of the web]. IEEE Signal Processing Magazine, 29(6), 141-142.
Dwork, C. (2011). Differential privacy. Encyclopedia of Cryptography and Security, 338-340.
Eykholt, K., Evtimov, I., Fernandes, E., Li, B., Rahmati, A., Xiao, C., Prakash, A., Kohno, T., & Song, D. (2018). Robust physical-world attacks on deep learning visual classification. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Fawzi, A., Moosavi-Dezfooli, S.-M., & Frossard, P. (2016). Robustness of classifiers: From adversarial to random noise. Paper presented at the Advances in Neural Information Processing Systems.
Ford, N., Gilmer, J., Carlini, N., & Cubuk, E. D. (2019). Adversarial examples are a natural consequence of test error in noise. arXiv preprint arXiv:1901.10513.
Goodfellow, I. J., Shlens, J., & Szegedy, C. (2014). Explaining and harnessing adversarial examples. arXiv preprint arXiv:1412.6572.
Guo, Y., Shi, H., Kumar, A., Grauman, K., Rosing, T., & Feris, R. (2019). SpotTune: Transfer learning through adaptive fine-tuning. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Hayes, J., & Danezis, G. (2018). Learning universal adversarial perturbations with generative models. Paper presented at the 2018 IEEE Security and Privacy Workshops (SPW).
He, K., Zhang, X., Ren, S., & Sun, J. (2016). Deep residual learning for image recognition. Paper presented at the Proceedings of the IEEE conference on computer vision and pattern recognition.
Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. arXiv preprint arXiv:1903.12261.
Hendrycks, D., Lee, K., & Mazeika, M. (2019). Using pre-training can improve model robustness and uncertainty. arXiv preprint arXiv:1901.09960.
Hinton, G. E., & Salakhutdinov, R. R. (2006). Reducing the dimensionality of data with neural networks. Science, 313(5786), 504-507.
Hung, C.-Y., Tu, C.-H., Wu, C.-E., Chen, C.-H., Chan, Y.-M., & Chen, C.-S. (2019). Compacting, picking and growing for unforgetting continual learning. Paper presented at the Advances in Neural Information Processing Systems.
Kirkpatrick, J., Pascanu, R., Rabinowitz, N., Veness, J., Desjardins, G., Rusu, A. A., Milan, K., Quan, J., Ramalho, T., & Grabska-Barwinska, A. (2017). Overcoming catastrophic forgetting in neural networks. Proceedings of the National Academy of Sciences, 114(13), 3521-3526.
Krizhevsky, A., & Hinton, G. (2009). Learning multiple layers of features from tiny images.
Kurakin, A., Goodfellow, I., & Bengio, S. (2016). Adversarial examples in the physical world. arXiv preprint arXiv:1607.02533.
Liu, Q., Li, P., Zhao, W., Cai, W., Yu, S., & Leung, V. C. (2018). A survey on security threats and defensive techniques of machine learning: A data driven view. IEEE Access, 6, 12103-12117.
Madaan, D., Shin, J., & Hwang, S. J. (2019). Adversarial neural pruning with latent vulnerability suppression. arXiv preprint arXiv:1908.04355.
Madry, A., Makelov, A., Schmidt, L., Tsipras, D., & Vladu, A. (2017). Towards deep learning models resistant to adversarial attacks. arXiv preprint arXiv:1706.06083.
Mallya, A., Davis, D., & Lazebnik, S. (2018). Piggyback: Adapting a single network to multiple tasks by learning to mask weights. Paper presented at the Proceedings of the European Conference on Computer Vision (ECCV).
Mallya, A., & Lazebnik, S. (2018). PackNet: Adding multiple tasks to a single network by iterative pruning. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Masse, N. Y., Grant, G. D., & Freedman, D. J. (2018). Alleviating catastrophic forgetting using context-dependent gating and synaptic stabilization. Proceedings of the National Academy of Sciences, 115(44), E10467-E10475.
McCann, B., Keskar, N. S., Xiong, C., & Socher, R. (2018). The natural language decathlon: Multitask learning as question answering. arXiv preprint arXiv:1806.08730.
McClelland, J. L., McNaughton, B. L., & O'Reilly, R. C. (1995). Why there are complementary learning systems in the hippocampus and neocortex: insights from the successes and failures of connectionist models of learning and memory. Psychological review, 102(3), 419.
Mesnil, G., Dauphin, Y., Glorot, X., Rifai, S., Bengio, Y., Goodfellow, I., Lavoie, E., Muller, X., Desjardins, G., & Warde-Farley, D. (2011). Unsupervised and transfer learning challenge: a deep learning approach. Paper presented at the Proceedings of the 2011 International Conference on Unsupervised and Transfer Learning workshop-Volume 27.
Moosavi-Dezfooli, S.-M., Fawzi, A., & Frossard, P. (2016). DeepFool: A simple and accurate method to fool deep neural networks. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Nelson, B., Barreno, M., Chi, F. J., Joseph, A. D., Rubinstein, B. I., Saini, U., Sutton, C., Tygar, J., & Xia, K. (2009). Misleading learners: Co-opting your spam filter. In Machine learning in cyber trust (pp. 17-51): Springer.
Netzer, Y., Wang, T., Coates, A., Bissacco, A., Wu, B., & Ng, A. Y. (2011). Reading digits in natural images with unsupervised feature learning.
Papernot, N., McDaniel, P., & Goodfellow, I. (2016). Transferability in machine learning: from phenomena to black-box attacks using adversarial samples. arXiv preprint arXiv:1605.07277.
Papernot, N., McDaniel, P., Goodfellow, I., Jha, S., Celik, Z. B., & Swami, A. (2017). Practical black-box attacks against machine learning. Paper presented at the Proceedings of the 2017 ACM on Asia conference on computer and communications security.
Papernot, N., McDaniel, P., Jha, S., Fredrikson, M., Celik, Z. B., & Swami, A. (2016). The limitations of deep learning in adversarial settings. Paper presented at the 2016 IEEE European Symposium on Security and Privacy (EuroS&P).
Papernot, N., McDaniel, P., Wu, X., Jha, S., & Swami, A. (2016). Distillation as a defense to adversarial perturbations against deep neural networks. Paper presented at the 2016 IEEE Symposium on Security and Privacy (SP).
Parisi, G. I., Kemker, R., Part, J. L., Kanan, C., & Wermter, S. (2019). Continual lifelong learning with neural networks: A review. Neural Networks, 113, 54-71.
Pfülb, B., & Gepperth, A. (2019). A comprehensive, application-oriented study of catastrophic forgetting in DNNs. arXiv preprint arXiv:1905.08101.
Pillai, I., Fumera, G., & Roli, F. (2012). F-measure optimisation in multi-label classifiers. Paper presented at the Proceedings of the 21st International Conference on Pattern Recognition (ICPR2012).
Rebuffi, S.-A., Kolesnikov, A., Sperl, G., & Lampert, C. H. (2017). iCaRL: Incremental classifier and representation learning. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Ribani, R., & Marengoni, M. (2019). A survey of transfer learning for convolutional neural networks. Paper presented at the 2019 32nd SIBGRAPI Conference on Graphics, Patterns and Images Tutorials (SIBGRAPI-T).
Rice, L., Wong, E., & Kolter, J. Z. (2020). Overfitting in adversarially robust deep learning. arXiv preprint arXiv:2002.11569.
Rusu, A. A., Rabinowitz, N. C., Desjardins, G., Soyer, H., Kirkpatrick, J., Kavukcuoglu, K., Pascanu, R., & Hadsell, R. (2016). Progressive neural networks. arXiv preprint arXiv:1606.04671.
Schmidt, L., Santurkar, S., Tsipras, D., Talwar, K., & Madry, A. (2018). Adversarially robust generalization requires more data. Paper presented at the Advances in Neural Information Processing Systems.
Sengupta, S., Chakraborti, T., & Kambhampati, S. (2018). MTDeep: Boosting the security of deep neural nets against adversarial attacks with moving target defense. Paper presented at the Workshops at the Thirty-Second AAAI Conference on Artificial Intelligence.
Shafahi, A., Najibi, M., Ghiasi, M. A., Xu, Z., Dickerson, J., Studer, C., Davis, L. S., Taylor, G., & Goldstein, T. (2019). Adversarial training for free! Paper presented at the Advances in Neural Information Processing Systems.
Shin, H., Lee, J. K., Kim, J., & Kim, J. (2017). Continual learning with deep generative replay. Paper presented at the Advances in Neural Information Processing Systems.
Simonyan, K., & Zisserman, A. (2014). Very deep convolutional networks for large-scale image recognition. arXiv preprint arXiv:1409.1556.
Su, D., Zhang, H., Chen, H., Yi, J., Chen, P.-Y., & Gao, Y. (2018). Is robustness the cost of accuracy?--A comprehensive study on the robustness of 18 deep image classification models. Paper presented at the Proceedings of the European Conference on Computer Vision (ECCV).
Su, J., Vargas, D. V., & Sakurai, K. (2019). One pixel attack for fooling deep neural networks. IEEE Transactions on Evolutionary Computation.
Sun, K., Zhu, Z., & Lin, Z. (2019). Towards understanding adversarial examples systematically: Exploring data size, task and model factors. arXiv preprint arXiv:1902.11019.
Szegedy, C., Zaremba, W., Sutskever, I., Bruna, J., Erhan, D., Goodfellow, I., & Fergus, R. (2013). Intriguing properties of neural networks. arXiv preprint arXiv:1312.6199.
Theagarajan, R., Chen, M., Bhanu, B., & Zhang, J. (2019). ShieldNets: Defending against adversarial attacks using probabilistic adversarial robustness. Paper presented at the Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition.
Tramèr, F., Kurakin, A., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2017). Ensemble adversarial training: Attacks and defenses. arXiv preprint arXiv:1705.07204.
Tramèr, F., Papernot, N., Goodfellow, I., Boneh, D., & McDaniel, P. (2017). The space of transferable adversarial examples. arXiv preprint arXiv:1704.03453.
Wu, Y., Chen, Y., Wang, L., Ye, Y., Liu, Z., Guo, Y., Zhang, Z., & Fu, Y. (2018). Incremental classifier learning with generative adversarial networks. arXiv preprint arXiv:1802.00853.
Yu, H., Liu, A., Liu, X., Yang, J., & Zhang, C. (2019). Towards Noise-Robust Neural Networks via Progressive Adversarial Training. arXiv preprint arXiv:1909.04839.
Zenke, F., Poole, B., & Ganguli, S. (2017). Continual learning through synaptic intelligence. Proceedings of Machine Learning Research, 70, 3987.
Zhang, H., & Xu, W. (2019). Adversarial Interpolation Training: A Simple Approach for Improving Model Robustness.
Zhang, H., Yu, Y., Jiao, J., Xing, E. P., Ghaoui, L. E., & Jordan, M. I. (2019). Theoretically principled trade-off between robustness and accuracy. arXiv preprint arXiv:1901.08573.
Zhong, Z., Jin, L., & Xie, Z. (2015). High performance offline handwritten Chinese character recognition using GoogLeNet and directional feature maps. Paper presented at the 2015 13th International Conference on Document Analysis and Recognition (ICDAR).
Zhu, M., & Gupta, S. (2017). To prune, or not to prune: exploring the efficacy of pruning for model compression. arXiv preprint arXiv:1710.01878.
Fulltext
This electronic fulltext is licensed to users only for personal, non-profit searching, reading, and printing for the purpose of academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined availability
Available:
Campus: available
Off-campus: available


Printed copies
Availability information for printed copies is relatively complete from academic year 102 (2013-14) onward. To inquire about the availability of printed copies from academic year 101 or earlier, please contact the printed thesis service counter of the Library and Information Services Office. We apologize for any inconvenience.
Available: available
