Title page for etd-0026121-163714
Title
深度學習於變換車道違規辨識之研究
The Research of Deep Learning for Recognizing Lane Change Violations
Department
Year, semester
Language
Degree
Number of pages
63
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2021-01-26
Date of Submission
2021-01-26
Keywords
Lane change violations recognition, Deep learning, ResNet, YOLOv4, Rear light status recognition, Lane crossing recognition
Statistics
This thesis/dissertation has been browsed 444 times and downloaded 220 times.
Chinese Abstract
With the growing adoption of dashcams and rising traffic safety awareness, the public filed 876,074 traffic violation reports in 2019. Traffic police units must spend considerable manpower and time processing and verifying the reported videos, a burden disproportionate to their core duties, and the Ministry of Transportation and Communications has proposed a cap on reports to reduce the workload.
Since the 2015 ILSVRC competition, the winning entry's image recognition error rate has been below the 5.1% error rate of human recognition. This study therefore proposes a method that uses deep learning to verify the violation of "failing to change lanes as required on national highways," reducing the labor cost of verifying reported traffic violation videos in an impartial, automated way. Beyond proposing the method and experimentally demonstrating its feasibility, the study presents an integrated solution, from dataset construction to model training and testing, for vehicle recognition, rear light status recognition, lane crossing recognition, and lane change violation recognition, and provides a range of evaluation results as a reference for future deployments.
Abstract
With the popularization of dashcams and increased traffic safety awareness, the number of traffic violation reports filed by the public in 2019 reached 876,074. Traffic police units must spend considerable manpower and time processing and verifying the reported videos, a burden disproportionate to their core duties, and the Ministry of Transportation and Communications has proposed a cap on reports to reduce the workload.
Since the 2015 ILSVRC competition, the winner's image recognition error rate has been lower than the 5.1% error rate of human recognition. This research therefore proposes a method that uses deep learning to verify the violation of failing to change lanes as required on national highways, reducing the labor cost of verifying traffic violation report videos in an impartial, automated way. In addition to the proposed method, this research presents an integrated solution, from dataset construction to model training and testing, for vehicle recognition, rear light status recognition, lane crossing recognition, and lane change violation recognition, and provides a range of evaluation results as a reference for future deployments.
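
To make the described pipeline concrete, the sketch below shows, in plain Python, one way the per-frame outputs of a rear light status classifier and a lane crossing classifier could be combined into a violation decision, roughly in the spirit of the dual-module method listed in the table of contents. The FrameLabel fields, the is_violation rule, and the min_signal_frames threshold are illustrative assumptions, not the thesis's actual implementation.

# Illustrative sketch only: combines hypothetical per-frame classifier outputs
# (turn signal on/off, lane crossing yes/no) into a lane change violation flag.
from dataclasses import dataclass
from typing import List

@dataclass
class FrameLabel:
    turn_signal_on: bool   # assumed output of a rear light status classifier
    crossing_lane: bool    # assumed output of a lane crossing classifier

def is_violation(frames: List[FrameLabel], min_signal_frames: int = 3) -> bool:
    """Flag a violation if a lane crossing starts before the turn signal
    has been on for at least `min_signal_frames` consecutive frames."""
    signal_streak = 0
    for frame in frames:
        if frame.crossing_lane:
            return signal_streak < min_signal_frames
        signal_streak = signal_streak + 1 if frame.turn_signal_on else 0
    return False  # no lane crossing observed in the clip

# Example: the signal is on for three frames before the crossing begins,
# so with the assumed threshold this clip is not flagged.
clip = [FrameLabel(False, False), FrameLabel(True, False),
        FrameLabel(True, False), FrameLabel(True, False),
        FrameLabel(True, True)]
print(is_violation(clip))   # False
print(is_violation([FrameLabel(True, False), FrameLabel(False, True)]))  # True

In the thesis itself such labels would come from trained YOLOv4 and ResNet models; the sketch only illustrates the kind of rule that ties their outputs together.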
Table of Contents
Thesis Approval Form
Chinese Abstract
Abstract
Acknowledgements
Table of Contents
List of Figures
List of Tables
Chapter 1 Introduction
1.1 Research Background and Motivation
1.2 Research Objectives
1.3 Research Process
Chapter 2 Literature Review
2.1 Vehicle Recognition
2.1.1 R-CNN
2.1.2 YOLOv4
2.2 Rear Light Status Recognition
2.3 Lane Line Detection
2.3.1 Traditional Computer Vision Methods
2.3.2 Deep Learning Methods
2.4 ResNet
Chapter 3 Research Methods
3.1 System Architecture
3.2 Vehicle Recognition Module Implementation
3.3 Deep Learning Models
3.4 Dataset Processing
3.4.1 Rear Light Status Dataset
3.4.2 Lane Crossing Dataset
3.4.3 Lane Change Violation Dataset
3.5 Rear Light Status Recognition Module Implementation
3.6 Lane Crossing Recognition Module Implementation
3.7 Lane Change Violation Recognition Module Implementation
3.7.1 Dual-Module Method
3.7.2 Single-Module Method
3.7.3 End-to-End Method
3.8 Model Evaluation
Chapter 4 Experimental Results
4.1 Experimental Environment
4.2 Rear Light Recognition Module
4.2.1 Comparison of Input Image Sizes
4.2.2 Comparison of Numbers of Classes
4.2.3 Comparison of ResNet Variants
4.2.4 Error Analysis
4.3 Lane Crossing Recognition Module
4.3.1 Comparison of Input Image Sizes
4.3.2 Comparison of Bounding Box Enlargement Ratios
4.3.3 Comparison of ResNet Variants
4.3.4 Error Analysis
4.4 Lane Change Violation Recognition Module
4.4.1 Single-Module Method
4.4.2 End-to-End Method
4.4.3 Comparison of the Three Recognition Methods
4.5 Strategy and Workflow for Verifying Reported Videos
Chapter 5 Conclusion
5.1 Conclusion
5.2 Future Research
Chapter 6 References
References
[1] List of countries by smartphone penetration rate. https://zh.wikipedia.org/wiki/%E5%90%84%E5%9C%8B%E6%99%BA%E6%85%A7%E5%9E%8B%E6%89%8B%E6%A9%9F%E6%99%AE%E5%8F%8A%E7%8E%87%E5%88%97%E8%A1%A8
[2] Dashcam penetration in Taiwan is only 30%. https://tw.appledaily.com/finance/20150206/YC6O36HHETV7HY5533UTUSHUEA/
[3] Traffic violation reports now require real names: amendment to the Uniform Penalty Standards and Handling Rules for Road Traffic Management Violations. https://www.banqiao.police.ntpc.gov.tw/cp-73-56573-11.html
[4] Photos keep coming even without rewards: has the "real-name system" increased rather than decreased traffic violation reports? Citizens: reporting is not about money but about venting frustration. https://www.techbang.com/posts/70385-no-bonus-photo-shoot-real-name-catch-traffic-violations-of-the-number-of-non-reduction-and-increase-people-the-report-is-not-for-money-but-to-express-uncomfortable
[5] Deep learning. https://zh.wikipedia.org/wiki/%E6%B7%B1%E5%BA%A6%E5%AD%A6%E4%B9%A0
[6] ImageNet. https://zh.wikipedia.org/wiki/ImageNet
[7] Alex Krizhevsky, Ilya Sutskever, Geoffrey E. Hinton. (2012). ImageNet
Classification with Deep Convolutional Neural Networks.
https://zh.wikipedia.org/wiki/AlexNet
[8] Serial traffic-violation reporters in Taipei exceed 10,000 reports a year; Ministry of Transportation considers a cap on reports. https://www.ettoday.net/amp/amp_news.php?news_id=1708598
[9] Overwork is an even more serious problem for the police. https://join.gov.tw/idea/detail/f5018aaf-45db-4772-b59e-a5f2a18c91f0
[10] AI will replace some human jobs, so why might society become "fairer"? https://buzzorange.com/techorange/2019/02/26/ai-replace-human/
[11] Ranking of Kaohsiung traffic problems: illegal parking tops the list. https://news.cts.com.tw/cts/life/202004/202004221998098.html
[12] Computer vision object detection models: R-CNN, Fast R-CNN, Faster R-CNN, Mask R-CNN, YOLO. https://medium.com/cuboai/%E7%89%A9%E9%AB%94%E5%81%B5%E6%B8%AC-object-detection-740096ec4540
[13] Ross Girshick, Jeff Donahue, Trevor Darrell, Jitendra Malik. (2013). Rich feature
hierarchies for accurate object detection and semantic segmentation.
https://arxiv.org/abs/1311.2524
[14] Ross Girshick. (2015). Fast R-CNN. https://arxiv.org/abs/1504.08083
[15] Shaoqing Ren, Kaiming He, Ross Girshick, Jian Sun. (2015). Faster R-CNN:
Towards Real-Time Object Detection with Region Proposal Networks.
https://arxiv.org/abs/1506.01497
[16] Kaiming He, Georgia Gkioxari, Piotr Dollar, Ross Girshick. Mask R-CNN.
https://research.fb.com/wp-content/uploads/2017/08/maskrcnn.pdf
[17] Deep learning: what is one-stage and what is two-stage object detection. https://chih-sheng-huang821.medium.com/%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92-%E4%BB%80%E9%BA%BC%E6%98%AFone-stage-%E4%BB%80%E9%BA%BC%E6%98%AFtwo-stage-%E7%89%A9%E4%BB%B6%E5%81%B5%E6%B8%AC-fc3ce505390f
[18] Alexey Bochkovskiy, Chien-Yao Wang, Hong-Yuan Mark Liao. (2020).
YOLOv4: Optimal Speed and Accuracy of Object Detection.
[19] Object detection: speed and accuracy comparison (Faster R-CNN, R-FCN, SSD, FPN, RetinaNet and YOLOv3). https://jonathan-hui.medium.com/object-detection-speed-and-accuracy-comparison-faster-r-cnn-r-fcn-ssd-and-yolo-5425656ae359
[20] A. Akhan, C. Mauricio, V. Senem. (2012). Autonomous tracking of vehicle rear lights and detection of brakes and turn signals. IEEE Symposium on Computational Intelligence for Security and Defence Applications (CISDA), pages 1–7.
[21] Zhiyong Cui, Shao-Wen Yang, Chenqi Wang, Hsin-Mu Tsai. (2014). On
addressing driving inattentiveness: Robust rear light status classification using
Hierarchical Matching Pursuit.
[22] Kuan-Hui Lee, Takaaki Tagawa, Jia-En M. Pan, Adrien Gaidon, Bertrand
Douillard. (2019). An Attention-based Recurrent Convolutional Network
for Vehicle Taillight Recognition.
[23] Han-Kai Hsu, Yi-Hsuan Tsai, Xue Mei, Kuan-Hui Lee, Naoki Nagasaka, Danil Prokhorov, Ming-Hsuan Yang. (2017). Learning to Tell Brake and Turn Signals in Videos Using CNN-LSTM Structure. IEEE International Conference on Intelligent Transportation Systems (ITSC).
[24] Mohamed Aly. (2008). Real time Detection of Lane Markers in Urban Streets.
[25] M. Bertozzi, A. Broggi. (1999). Real-time lane and obstacle detection on the
GOLD system.
[26] Xingang Pan, Jianping Shi, Ping Luo, Xiaogang Wang, Xiaoou Tang. (2018).
Spatial As Deep: Spatial CNN for Traffic Scene Understanding.
[27] The Cityscapes Dataset. https://www.cityscapes-dataset.com/
[28] TuSimple Competitions for CVPR2017. https://github.com/TuSimple/tusimple-benchmark
[29] CULane Dataset. https://xingangpan.github.io/projects/CULane.html
[30] Codes-for-Lane-Detection. https://github.com/cardwing/Codes-for-Lane-Detection
[31] QGIS, A Free and Open Source Geographic Information System.
https://qgis.org/en/site/
[32] This code is for processing annotations of CULane dataset.
https://github.com/XingangPan/seg_label_generate
[33] Kaiming He, Xiangyu Zhang, Shaoqing Ren, Jian Sun. (2015). Deep Residual
Learning for Image Recognition.
[34] Large Scale Visual Recognition Challenge (ILSVRC). http://www.image-net.org/challenges/LSVRC/
[35] Ground truth. https://zh.wikipedia.org/wiki/Ground_truth
[36] OpenCV. https://opencv.org/
[37] COCO dataset. https://cocodataset.org/
[38] YOLOv4 Pre-trained models. https://github.com/AlexeyAB/darknet#pre-trained-models
[39] Mio MiVue™ 588 specifications. https://www.mio.com/tw/mivue-588
[40] The evolution of training data preparation and platforms for deep learning. https://ictjournal.itri.org.tw/Content/Messagess/contents.aspx?MmmID=654304432061644411&MSID=1001517067307416615
[41] AI study notes: End-to-End deep learning. https://steemit.com/ai/@hongtao/ai-end-to-end
[42] Deep learning series: what are AP and mAP? https://chih-sheng-huang821.medium.com/%E6%B7%B1%E5%BA%A6%E5%AD%B8%E7%BF%92%E7%B3%BB%E5%88%97-%E4%BB%80%E9%BA%BC%E6%98%AFap-map-aaf089920848
[43] A brief introduction to Precision, Recall, and F1-score. https://medium.com/nlp-tsupei/precision-recall-f1-score%E7%B0%A1%E5%96%AE%E4%BB%8B%E7%B4%B9-f87baa82a47
[44] Accelerating machine learning with the cuDNN deep neural network library. https://blogs.nvidia.com.tw/2014/09/07/accelerate-machine-learning-cudnn-deep-neural-network-library/
[45] Correspondence between TensorFlow versions and the required CUDA and cuDNN versions. https://blog.csdn.net/qq_27825451/article/details/89082978
[46] Understanding Epoch, Iteration, and Batch size in neural networks. https://codertw.com/%E7%A8%8B%E5%BC%8F%E8%AA%9E%E8%A8%80/557816/
[47] Deep learning: the learning rate. https://blog.csdn.net/JNingWei/article/details/79243800
[48] Online reporting of national highway traffic violations. https://www.hpb.gov.tw/p/412-1000-116.php
[49] Turn signal. https://zh.wikipedia.org/wiki/%E6%96%B9%E5%90%91%E7%87%88
[50] Turn signal flashed only 3 times! European car reported 19 times for lane changes and fined NT$57,000; appeal result announced. ETtoday News. https://www.ettoday.net/news/20200727/1769957.htm
[51] Why are some highway sections "pitch dark"? Freeway Bureau: based on reasoned considerations. https://www.setn.com/News.aspx?NewsID=644801
[52] Saining Xie, Ross Girshick, Piotr Dollár, Zhuowen Tu, Kaiming He. (2017). Aggregated Residual Transformations for Deep Neural Networks.
[53] Chien-Yao Wang, Hong-Yuan Mark Liao, Yueh-Hua Wu, Ping-Yang Chen, Jun-Wei Hsieh, I-Hau Yeh. (2020). CSPNet: A New Backbone that can Enhance Learning Capability of CNN.
[54] Convolutional neural network. https://en.wikipedia.org/wiki/Convolutional_neural_network
[55] Support-vector machine. https://en.wikipedia.org/wiki/Support-vector_machine
[56] Bounding boxes. https://computersciencewiki.org/index.php/Bounding_boxes
[57] Image segmentation. https://en.wikipedia.org/wiki/Image_segmentation
[58] Long short-term memory. https://en.wikipedia.org/wiki/Long_short-term_memory
[59] Stochastic gradient descent.
https://en.wikipedia.org/wiki/Stochastic_gradient_descent
Fulltext
This electronic full text is licensed to users only for personal, non-profit retrieval, reading, and printing for academic research. Please comply with the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined release date
Available:
Campus: available
Off-campus: available


Printed copies
Information on the public availability of printed theses is relatively complete from academic year 102 (2013) onward. To check the availability of printed theses from academic year 101 (2012) or earlier, please contact the printed thesis service desk of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
