Master's/Doctoral Thesis etd-0628120-141359: Detailed Record
Title page for etd-0628120-141359
論文名稱
Title
基於Wi-Fi都卜勒感測技術之即時手語辨識系統
Real-Time Sign Language Recognition System Using Wi-Fi Based Doppler Sensing Technology
系所名稱
Department
畢業學年期
Year, semester
語文別
Language
學位類別
Degree
頁數
Number of pages
78
研究生
Author
指導教授
Advisor
召集委員
Convenor
口試委員
Advisory Committee
口試日期
Date of Exam
2020-07-17
繳交日期
Date of Submission
2020-07-28
關鍵字
Keywords
Wi-Fi雷達、長短期記憶網路、都卜勒雷達、格拉姆角場、深度學習、手勢感測、手語辨識、卷積神經網路
Doppler radar, gesture sensing, Gramian angular field, deep learning, sign language recognition, Wi-Fi radar, long short-term memory network, convolutional neural network
統計
Statistics
本論文已被瀏覽 5818 次,被下載 0 次。
The thesis/dissertation has been browsed 5818 times, has been downloaded 0 times.
中文摘要
本論文是使用注入鎖定正交接收機架構的被動式都卜勒雷達,並利用環境中的Wi-Fi訊號做為雷達的發射訊號源去進行手勢感測。為了有效提升感測的性能,所以在雷達系統中針對天線的部分進行一連串的測試與驗證,證明了當接收天線的數量上升時,天線在手勢方向的視角上會增加空間的維度,而強化在感測訊號中所附加的都卜勒資訊,因此選擇使用1T4R的多天線架構進行實驗。
而在訊號處理的部分是使用格拉姆角場的方法,將手語偵測所產生的I通道及Q通道時間序列數據轉換為圖像,該圖像不僅包含了時間和空間的訊息,還可以將雜訊與特徵區分開來,並且受到直流準位偏移的影響較小,因此能免去相關校正步驟使實驗流程更簡化。
接著再透過深度學習的特徵擷取方式,從感測訊號中將不同手勢的獨特特徵進行提取,並作為神經網路的輸入數據進行訓練。該神經網路模型是結合具有圖像辨識能力的卷積神經網路與具有捕捉時間相關特徵的長短期記憶網路。
最終,以較少量的數據集對簡單的神經網路模型進行訓練並成功獲得9成的辨識準確率,進而實現基於被動式都卜勒雷達的即時手語辨識系統去分辨出10種不同的台灣手語。
Abstract
This thesis presents a passive Doppler radar based on an injection-locked quadrature receiver architecture that uses ambient Wi-Fi signals as its transmitting source to perform gesture sensing. To improve the radar's sensing performance, the antenna configuration was evaluated through a series of tests and verifications. The results show that as the number of receiving antennas increases, gestures are observed from more directions, which enriches the Doppler information captured in the sensing signal. A 1T4R (one-transmit, four-receive) antenna configuration was therefore adopted in the experiments.
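The sensing described above rests on the familiar two-way Doppler relation f_d = 2·v·f_c/c for a reflector moving at radial speed v. A minimal sketch, assuming a 2.4 GHz Wi-Fi carrier and an illustrative hand speed of 1 m/s (these values are for illustration and are not taken from the thesis):

```python
def doppler_shift(v_mps: float, carrier_hz: float = 2.4e9, c_mps: float = 3.0e8) -> float:
    """Two-way Doppler shift (Hz) for a reflector moving at radial speed v_mps.

    The factor of 2 accounts for the round trip: transmitter -> hand -> receiver.
    """
    return 2.0 * v_mps * carrier_hz / c_mps

# A hand moving at ~1 m/s relative to a 2.4 GHz Wi-Fi source shifts the echo by about 16 Hz.
print(doppler_shift(1.0))  # → 16.0
```

The small size of this shift relative to the carrier is why a sensitive quadrature receiver is needed to recover it.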
For signal processing, the Gramian angular field (GAF) method converts the I- and Q-channel time-series data produced during sign-language detection into an image. The image not only encodes temporal and spatial information but also separates features from noise. Moreover, it is only slightly affected by DC offsets, so the associated calibration step can be omitted, simplifying the experimental procedure.
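The GAF encoding can be sketched as follows: rescale the series to [-1, 1], map each sample to a polar angle φ = arccos(x), and form the image cos(φ_i + φ_j) (summation form) or sin(φ_i − φ_j) (difference form). This is a minimal NumPy sketch of the standard method, not necessarily the thesis's exact preprocessing:

```python
import numpy as np

def gramian_angular_field(x: np.ndarray, method: str = "summation") -> np.ndarray:
    """Encode a 1-D time series as a Gramian angular field image.

    The series is min-max rescaled to [-1, 1], mapped to polar angles
    phi = arccos(x), and the image is cos(phi_i + phi_j) ("summation", GASF)
    or sin(phi_i - phi_j) ("difference", GADF).
    """
    x = np.asarray(x, dtype=float)
    rng = x.max() - x.min()
    # A constant series rescales to all zeros; clip guards against float round-off.
    xs = np.zeros_like(x) if rng == 0 else 2.0 * (x - x.min()) / rng - 1.0
    phi = np.arccos(np.clip(xs, -1.0, 1.0))
    if method == "summation":
        return np.cos(phi[:, None] + phi[None, :])
    return np.sin(phi[:, None] - phi[None, :])
```

For the I- and Q-channel data, one image would be computed per channel and the two stacked as input channels of the network.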
The unique features of different signs are then extracted from the sensing signals through deep learning and used as input data for training a neural network. The network combines a convolutional neural network, which recognizes images, with a long short-term memory network, which captures time-dependent features.
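The CNN-plus-LSTM combination can be illustrated with a toy forward pass: a naive convolution extracts features from each image frame, an LSTM cell carries state across frames, and the final hidden state is projected to class scores. All shapes and weights below are random placeholders, not the thesis's actual architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

def conv2d_valid(img, kernel):
    """Naive single-channel 2-D valid convolution, standing in for the CNN stage."""
    kh, kw = kernel.shape
    h_out, w_out = img.shape[0] - kh + 1, img.shape[1] - kw + 1
    out = np.empty((h_out, w_out))
    for i in range(h_out):
        for j in range(w_out):
            out[i, j] = np.sum(img[i:i + kh, j:j + kw] * kernel)
    return out

def lstm_step(x, h, c, W, U, b):
    """One LSTM cell update (input, forget, output gates plus candidate state)."""
    z = W @ x + U @ h + b
    n = h.size
    i = 1.0 / (1.0 + np.exp(-z[:n]))           # input gate
    f = 1.0 / (1.0 + np.exp(-z[n:2 * n]))      # forget gate
    o = 1.0 / (1.0 + np.exp(-z[2 * n:3 * n]))  # output gate
    g = np.tanh(z[3 * n:])                     # candidate cell state
    c_new = f * c + i * g
    h_new = o * np.tanh(c_new)
    return h_new, c_new

# Toy pipeline: 5 frames of 8x8 "GAF images" -> conv features -> LSTM -> 10 class scores.
n_frames, hidden, n_classes = 5, 4, 10
kernel = rng.normal(size=(3, 3))               # 8x8 input -> 6x6 feature map -> 36 features
W_in = rng.normal(size=(4 * hidden, 36))
U_rec = rng.normal(size=(4 * hidden, hidden))
b = np.zeros(4 * hidden)
W_out = rng.normal(size=(n_classes, hidden))

h, c = np.zeros(hidden), np.zeros(hidden)
for _ in range(n_frames):
    frame = rng.normal(size=(8, 8))            # stands in for one GAF image
    h, c = lstm_step(conv2d_valid(frame, kernel).ravel(), h, c, W_in, U_rec, b)
scores = W_out @ h                             # unnormalized scores for the 10 gestures
```

In practice each stage would be a trained multi-layer network; the sketch only shows how spatial features per frame are folded into a temporal state before classification.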
Finally, a simple neural network model was trained with a small data set and achieved a recognition accuracy of 90%. The model was then deployed in a real-time sign-language recognition system, based on the passive Doppler radar, that distinguishes 10 different Taiwanese Sign Language signs.
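When only a small data set is available, a reported accuracy is usually validated with k-fold cross-validation (the thesis covers this in Section 3.4.2). A minimal sketch of the index splitting; the fold count and seed here are illustrative:

```python
import numpy as np

def k_fold_indices(n_samples: int, k: int, seed: int = 0):
    """Yield (train_idx, val_idx) index pairs for k-fold cross-validation."""
    idx = np.random.default_rng(seed).permutation(n_samples)
    folds = np.array_split(idx, k)
    for i in range(k):
        val = folds[i]
        train = np.concatenate([folds[j] for j in range(k) if j != i])
        yield train, val

# 100 samples, 5 folds: each split holds out 20 samples for validation.
splits = list(k_fold_indices(100, 5))
print(len(splits))  # → 5
```

Each sample appears in exactly one validation fold, so the five validation accuracies together cover the whole data set.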
目次 Table of Contents
Thesis Certification .......... i
Thesis Publication Authorization .......... ii
Acknowledgments .......... iii
Chinese Abstract .......... iv
Abstract .......... v
Table of Contents .......... vi
List of Figures .......... viii
List of Tables .......... xi
Chapter 1 Introduction .......... 1
1.1 Research Background and Motivation .......... 1
1.2 Introduction to Doppler Radar .......... 4
1.3 Chapter Organization and Research Objectives .......... 6
Chapter 2 Experimental Setup .......... 7
2.1 Overview .......... 7
2.2 System Architecture .......... 7
2.2.1 Feasibility Evaluation .......... 7
2.2.2 Analysis of the Antenna Configuration .......... 12
2.2.2.1 Adjusting the Transmitting and Receiving Antennas .......... 12
2.2.2.2 Increasing the Number of Receiving Antennas .......... 22
2.2.3 Final System Architecture .......... 23
2.3 Signal Processing and Presentation .......... 26
2.3.1 Comparison of Preprocessing Methods .......... 26
2.3.2 Gramian Angular Field (GAF) .......... 27
2.3.3 Presentation of Processed Signals .......... 32
2.3.4 Selection of the Ten Gestures .......... 33
Chapter 3 Deep Learning .......... 35
3.1 Introduction to Deep Learning .......... 35
3.1.1 Overview .......... 35
3.1.2 Introduction to Machine Learning .......... 36
3.1.3 Introduction to Deep Learning .......... 38
3.1.4 Introduction to Neural Networks .......... 39
3.2 Network Architecture .......... 44
3.2.1 Hidden-Layer Architecture: CNN .......... 44
3.2.2 Hidden-Layer Architecture: LSTM .......... 48
3.3 Training Method .......... 51
3.4 Training Results .......... 53
3.4.1 Training with Different Neural Networks .......... 53
3.4.2 Cross-Validation .......... 55
3.4.3 Selection of the Test Data Set .......... 56
3.4.4 Real-Time Recognition Demonstration and Verification .......... 57
Chapter 4 Conclusion .......... 59
References .......... 60
參考文獻 References
[1] J.-K. Oh, S.-J. Cho, W.-C. Bang, W. Chang, E. Choi, J. Yang, J. Cho and D.-Y. Kim, “Inertial sensor based recognition of 3-D character gestures with an ensemble classifiers,” in Proc. 9th Int. Workshop Front. Handwrit. Recognit., Kokubunji, Tokyo, Japan, Oct. 2004, pp. 112-117.
[2] A.-A. Orlov, K.-V. Makarov and E.-S. Tarantova, “Features selection for human activity recognition in telerehabilitation,” in Proc. Int. Science Techn. Conf., Vladivostok, Russia, Mar. 2019, pp. 1-5.
[3] A. Dekate, A. Kamal and K.-S. Surekha, “Magic glove - wireless hand gesture hardware controller,” in Int. Conf. Electronics Commun. Syst. (ICECS) Dig., Coimbatore, Feb. 2014, pp. 1-4.
[4] D. Xu, “A neural network approach for hand gesture recognition in virtual reality driving training system of SPG,” in Proc. 18th Int. Conf. Pattern Recognit. (ICPR), Hong Kong, Aug. 2006, pp. 519-522.
[5] L. Dipietro, A.-M. Sabatini and P. Dario, “A survey of glove-based systems and their applications,” IEEE Trans. Syst. Man Cybern. C Appl. Rev., vol. 38, no. 4, pp. 461-482, Jul. 2008.
[6] J.-L. Hernandez-Rebollar, N. Kyriakopoulos and R.-W. Lindeman, “The AcceleGlove: A whole-hand input device for virtual reality,” in Proc. Abstr. Appl. ACM SIGGRAPH Conf., Jul. 2002, pp. 259.
[7] K.-S. Abhishek, L.-C.-F. Qubeley and D. Ho, “Glove-based hand gesture recognition sign language translator using capacitive touch sensor,” in Proc. IEEE Int. Conf. Electron Devices Solid-State Circuits (EDSSC), Hong Kong, Aug. 2016, pp. 334-337.
[8] X. Zhang, X. Chen, Y. Li, V. Lantz, K. Wang and J. Yang, “A framework for hand gesture recognition based on accelerometer and EMG sensors,” IEEE Trans. Syst. Man Cybern. A Syst. Humans, vol. 41, no. 6, pp. 1064-1076, Nov. 2011.
[9] 洪雅筠, “Wii: Classic games become history,” CNEWS 匯流新聞網, 2019. [Online].
Available: https://cnews.com.tw/005190131a02/.
[10] E. Darrell, “Thalmic Labs shows off MYO development process, demos the armband controlling Tetris and a Sphero,” TechCrunch, 2013. [Online].
Available: https://techcrunch.com/2013/04/24/thalmic-labs-shows-off-myo-development-process-demos-the-armband-controlling-tetris-and-a-sphero/
[11] G. Rogez, J.-S. Supancic and D. Ramanan, “Understanding everyday hands in action from rgb-d images,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Santiago, Dec. 2015, pp. 3889-3897.
[12] C. Choi, A. Sinha, J.-H. Choi, S. Jang and K. Ramani, “A collaborative filtering approach to real-time hand pose estimation,” in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), Santiago, Dec. 2015, pp. 2336-2344.
[13] Z. Ren, J. Yuan, J. Meng and Z. Zhang, “Robust part-based hand gesture recognition using kinect sensor,” IEEE Trans. Multimed., vol. 15, no. 5, pp. 1110-1120, Aug. 2013.
[14] Z. Zhang, “Microsoft kinect sensor and its effect,” IEEE Multimed., vol. 19, no. 2, pp. 4-10, Feb. 2012.
[15] H. Cheng, L. Yang and Z. Liu, “Survey on 3D hand gesture recognition,” IEEE Trans. Circuits Syst. Video Techn., vol. 26, no. 9, pp. 1659-1673, Sep. 2016.
[16] P. Molchanov, S. Gupta, K. Kim and K. Pulli, “Short-range FMCW monopulse radar for hand-gesture sensing,” in Proc. IEEE Radar Conf., Arlington, VA, May 2015, pp. 1491-1496.
[17] T. Fan, C. Ma, Z. Gu, Q. Lv, J. Chen, D. Ye, J. Huangfu, Y. Sun, C. Li and L. Ran, “Wireless hand gesture recognition based on continuous-wave Doppler radar sensors,” IEEE Trans. Microw. Theory Techn., vol. 64, no. 11, pp. 4012-4020, Nov. 2016.
[18] Q. Wan, Y. Li, C. Li and R. Pal, “Gesture recognition for smart home applications using portable radar sensors,” in 36th Annual Int. Conf. IEEE Eng. Med. Biol. Soc., Chicago, IL, Aug. 2014, pp. 6414-6417.
[19] J. Lien, N. Gillian, M.-E. Karagozler, P. Amihood, C. Schwesig, E. Olson, H. Raja and I. Poupyrev, “Soli: Ubiquitous gesture sensing with millimeter wave radar,” ACM Trans. Graph., vol. 35, no. 4, Jul. 2016.
[20] S. Wang, J. Song, J. Lien, I. Poupyrev and O. Hilliges, “Interacting with soli: Exploring fine-grained dynamic gesture recognition in the radio-frequency spectrum,” in Proc. 29th ACM Symp. User Interface Softw. Techn., Tokyo, Japan, 2016, pp. 851-860.
[21] Y. Kim and B. Toomajian, “Hand gesture recognition using micro-Doppler signatures with convolutional neural network,” IEEE Access, vol. 4, pp. 7125-7130, Oct. 2016.
[22] Y. Kim and B. Toomajian, “Application of Doppler radar for the recognition of hand gestures using optimized deep convolutional neural networks,” in Proc. 11th Eur. Conf. Antennas Propag. (EUCAP), Paris, Mar. 2017, pp. 1258-1260.
[23] G. Li, S. Zhang, F. Fioranelli and H. Griffiths, “Effect of sparsity-aware time–frequency analysis on dynamic hand gesture classification with radar micro-Doppler signatures,” IET Radar, Sonar Navig., vol. 12, no. 8, pp. 815-820, 2018.
[24] Z. Zhang, Z. Tian and M. Zhou, “Latern: Dynamic continuous hand gesture recognition using FMCW radar sensor,” IEEE Sens. J., vol. 18, no. 8, pp. 3278-3289, Apr. 2018.
[25] D. Lee, H. Yoon and J. Kim, “Continuous gesture recognition by using gesture spotting,” in Proc. 16th Int. Conf. Control, Autom. Syst. (ICCAS), Gyeongju, Oct. 2016, pp. 1496-1498.
[26] M. Elmezain, A. Al-Hamadi and B. Michaelis, “A robust method for hand gesture segmentation and recognition using forward spotting scheme in conditional random fields,” in Proc. 20th Int. Conf. Pattern Recognit., Istanbul, Aug. 2010, pp. 3850-3853.
[27] A. Graves, S. Fernández, F. Gomez and J. Schmidhuber, “Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks,” in Proc. ACM Int. Conf. Mach. Learn., Pittsburgh, 2006, pp. 369-376.
[28] 德希科技, “The Doppler effect,” 壹讀 (Read01), 2017. [Online].
Available: https://read01.com/6G6Edak.html#.XuzMTkUzZPY
[29] J. Tu, T. Hwang and J. Lin, “Respiration rate measurement under 1-D body motion using single continuous-wave Doppler radar vital sign detection system,” IEEE Trans. Microw. Theory Techn., vol. 64, no. 6, pp. 1937-1946, Jun. 2016.
[30] R. Fletcher and J. Han, “Low-cost differential front-end for Doppler radar vital sign monitoring,” in IEEE MTT-S Int. Microw. Symp. Dig., Boston, MA, Jun. 2009, pp. 1325-1328.
[31] H. Lee, B.-H. Kim and J.-G. Yook, “Path loss compensation method for multiple target vital sign detection with 24-GHz FMCW radar,” in Proc. IEEE Asia-Pacific Conf. Antennas Propag. (APCAP), Auckland, Aug. 2018, pp. 100-101.
[32] S.-G. Kim, H. Kim, Y. Lee, I.-S. Kho and J.-G. Yook, “5.8 GHz vital signal sensing Doppler radar using isolation-improved branch-line coupler,” in Proc. 3rd Eur. Radar Conf., Manchester, Sep. 2006, pp. 249-252.
[33] F.-K. Wang, M.-C. Tang, Y.-C. Chiu and T.-S. Horng, “Gesture sensing using retransmitted wireless communication signals based on Doppler radar technology,” IEEE Trans. Microw. Theory Techn., vol. 63, no. 12, pp. 4592-4602, Dec. 2015.
[34] M.-C. Tang, F.-K. Wang and T.-S. Horng, “Human gesture sensor using ambient wireless signals based on passive radar technology,” in IEEE MTT-S Int. Microw. Symp. Dig., Phoenix, AZ, May 2015, pp. 1-4.
[35] 邱彥禎, “Gesture and vital sign sensing using the Doppler effect of mobile communication signals,” M.S. thesis, Dept. Elect. Eng., National Sun Yat-sen University, 2014.
[36] S. Skaria, A. Al-Hourani, M. Lech and R.-J. Evans, “Hand-gesture recognition using two-antenna Doppler radar with deep convolutional neural networks,” IEEE Sensors J., vol. 19, no. 8, pp. 3041-3048, Apr. 2019.
[37] 周傳期, “Gesture detection using Wi-Fi signals and deep learning recognition,” M.S. thesis, Dept. Elect. Eng., National Sun Yat-sen University, 2018.
[38] J.-W. Choi, S.-J. Ryu and J.-H. Kim, “Short-range radar based real-time hand gesture recognition using LSTM encoder,” IEEE Access, vol. 7, pp. 33610-33618, Mar. 2019.
[39] S.-J. Ryu, J.-S. Suh, S.-H. Baek, S. Hong and J.-H. Kim, “Feature-based hand gesture recognition using an FMCW radar and its temporal feature analysis,” IEEE Sensors J., vol. 18, no. 18, pp. 7593-7602, Sep. 2018.
[40] Z. Wang and T. Oates, “Imaging time-series to improve classification and imputation,” in Proc. 24th Int. Conf. Artif. Intell., ser. IJCAI’15. AAAI Press, 2015, pp. 3939–3945.
[41] Louis, “Encoding time series as images,” Medium, 2018. [Online].
Available: https://medium.com/analytics-vidhya/encoding-time-series-as-images-b043becbdbf3
[42] M. Copeland, “What's the difference between artificial intelligence, machine learning, and deep learning?,” NVIDIA, 2016. [Online].
Available: https://blogs.nvidia.com.tw/2016/07/whats-difference-artificial-intelligence-machine-learning-deep-learning-ai/
[43] M. Chen, “Machine learning,” OOSGA. [Online].
Available: https://oosga.com/machine-learning/
[44] T. Huang, “What are artificial intelligence, machine learning, and deep learning?,” Medium, 2018. [Online]. Available: https://medium.com/@chih.sheng.huang821/
[45] Ryan, “Training neural networks,” 知乎 (Zhihu), 2017. [Online].
Available: https://zhuanlan.zhihu.com/p/32154845
[46] Lynn, “The decline and resurgence of machine learning: From neural networks to shallow learning,” StockFeel, 2016. [Online]. Available: https://www.stockfeel.com.tw/
[47] 林厚勳, “27 neural network models, illustrated,” TechOrange, 2018. [Online].
Available: https://buzzorange.com/techorange/2018/01/24/neural-networks-compare/
[48] G. Hinton, L. Deng, D. Yu, G.-E. Dahl, A.-R. Mohamed, N. Jaitly, A. Senior, V. Vanhoucke, P. Nguyen, T.-N. Sainath and B. Kingsbury, “Deep neural networks for acoustic modeling in speech recognition: The shared views of four research groups,” IEEE Signal Proc. Mag., vol. 29, no. 6, pp. 82-97, Nov. 2012.
[49] B. Rohrer, “How do convolutional neural networks work?,” Data Science and Robots, 2016. [Online].
Available: https://brohrer.mcknote.com/zhHant/how_machine_learning_works/how_convolutional_neural_networks_work.html
[50] Y. James, “Introduction to convolutional neural networks,” Medium, 2017. [Online].
Available: https://medium.com/jameslearningnote/
[51] H. Kulhandjian, P. Sharma, M. Kulhandjian and C. D'Amours, “Sign language gesture recognition using Doppler radar and deep learning,” in Proc. IEEE Globecom Workshops (GC Wkshps), Waikoloa, HI, Dec. 2019, pp. 1-6.
[52] S. Hochreiter and J. Schmidhuber, “Long short-term memory,” Neural Comput., vol. 9, no. 8, pp. 1735-1780, 1997.
[53] 陳誠, “An LSTM explanation anyone can understand,” 知乎 (Zhihu), 2018. [Online].
Available: https://zhuanlan.zhihu.com/p/32085405
[54] “K-fold cross validation.” [Online].
Available: http://ethen8181.github.io/machinelearning/model_selection/model_selection.html
電子全文 Fulltext
This electronic full text is licensed for personal, non-profit retrieval, reading, and printing for academic research purposes only. Please comply with the relevant provisions of the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
論文使用權限 Thesis access permission: user-defined release period
開放時間 Available:
校內 Campus:永不公開 not available
校外 Off-campus:永不公開 not available


紙本論文 Printed copies
Availability information for printed copies is relatively complete from academic year 102 (2013-14) onward. To check the availability of printed theses from academic year 101 or earlier, please contact the printed-thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
開放時間 available 永不公開 not available
