Title page for etd-0628122-092729
Title
以次級資料輔助機器學習模型於麻醉照會前預測手術後30天死亡率
Predicting postoperative 30 days mortality before anesthesia consultation with secondary data-assisted machine learning models
Department
Year, semester
Language
Degree
Number of pages
123
Author
Advisor
Convenor
Advisory Committee
Date of Exam
2022-07-22
Date of Submission
2022-07-28
Keywords
mortality prediction, knowledge base, machine learning, deep learning, anesthesia
Statistics
This thesis/dissertation has been browsed 537 times and downloaded 11 times.
Chinese Abstract
Among global death cases recorded up to 2019, overall postoperative mortality ranked as the third leading cause of death worldwide. Predicting a patient's death in advance is highly relevant to clinical decision-making and patient outcomes, so building an accurate mortality prediction model for surgical patients has become an important topic. In recent years, applications of artificial intelligence in medicine have produced many results. This study explores a method that uses secondary data to construct a knowledge base to strengthen the predictive accuracy of machine learning, and extends the method to various machine learning and deep learning models, as well as the currently widely used attention model architecture. In addition to validation on data from a single hospital, data from two other hospitals in the Kaohsiung Medical University Hospital system were used for external validation; Shapley values were used for model interpretation and for a detailed examination of model weights. In the experimental results, models fused with the knowledge base, whether traditional machine learning models or deep learning models, achieved significantly higher accuracy on the overall test data under a resampling test. Regarding model interpretation, the weights with the greatest influence were, in order: the historical average mortality rate of the procedure provided by the knowledge base, whether the patient had multiple comorbidities, and the blood test values used by clinical anesthesiologists for risk assessment. This study therefore developed an accurate and well-interpretable preoperative model that predicts mortality within 30 days after surgery using only routine medical records available before surgery, and, through validation on the datasets of three hospitals, demonstrated that building a knowledge base from secondary statistical data is an effective and relatively simple way to improve a model's predictive accuracy.
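The knowledge-base fusion described in the abstract — aggregating each procedure's historical mortality from secondary data and feeding the statistics to the models as extra input features — can be sketched roughly as follows. The column names and toy data here are illustrative assumptions, not the thesis's actual schema:

```python
import pandas as pd

# Hypothetical secondary data: past surgical cases with a procedure code
# and a 30-day mortality outcome (0 = survived, 1 = died).
history = pd.DataFrame({
    "procedure_code": ["A10", "A10", "A10", "B20", "B20", "C30"],
    "died_30d":       [0,     1,     0,     0,     0,     1],
})

# Knowledge base: per-procedure aggregate statistics from the secondary data.
kb = (history.groupby("procedure_code")["died_30d"]
             .agg(kb_mortality_rate="mean", kb_case_count="size")
             .reset_index())

# New preoperative cases: merge the knowledge-base statistics in as
# additional features before model training or prediction.
new_cases = pd.DataFrame({"patient_id": [1, 2],
                          "procedure_code": ["A10", "C30"]})
fused = new_cases.merge(kb, on="procedure_code", how="left")
print(fused)
```

Any downstream model (gradient boosting, a deep network, an attention architecture) then simply receives `kb_mortality_rate` and `kb_case_count` alongside the routine preoperative variables.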
Abstract
Among global death cases recorded up to 2019, overall postoperative mortality ranked third worldwide, and early prediction of a patient's probability of death is highly relevant to clinical decision-making and recovery. Building a model that predicts surgical patients' mortality efficiently and accurately has therefore become an important topic. In recent years, the application of artificial intelligence to medical care has produced useful developments and achievements. This study explores the use of secondary data to build a knowledge base that enhances the accuracy of machine learning predictions, and applies the method to various machine learning and deep learning models, including the currently widely used attention model architecture. In addition to validation on data from Kaohsiung Medical University Hospital, this work uses data from two of its affiliated hospitals for external validation. Shapley values are used for model interpretation and for a detailed discussion of model weights. The experimental results show that models incorporating the knowledge base, whether traditional machine learning models or deep learning models, achieve significantly more accurate predictions under bootstrap testing. In terms of model interpretation, the features with the highest observed effects are: the historical average mortality rate of the procedure provided by the knowledge base, the presence of one or more comorbidities, and the blood test values used by clinical anesthesiologists for risk assessment.
Overall, this study develops an accurate and well-interpretable preoperative model to predict mortality within 30 days after surgery using only routine medical records available before surgery. Using secondary statistics to create a knowledge base is an effective and relatively simple method for improving the accuracy of a predictive model, as verified on the datasets of the three hospitals.
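The bootstrap comparison mentioned in the abstract — resampling the test set to check whether the knowledge-base-fused model's AUC gain over a baseline is significant — might look roughly like this. The labels and predicted probabilities below are synthetic stand-ins; the thesis's actual resampling procedure may differ in its details:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic test-set labels and predicted probabilities from two models
# (stand-ins for the baseline and the knowledge-base-fused model).
y_true = rng.integers(0, 2, size=500)
p_base = np.clip(y_true * 0.6 + rng.normal(0.20, 0.25, 500), 0, 1)
p_kb   = np.clip(y_true * 0.7 + rng.normal(0.15, 0.20, 500), 0, 1)

def bootstrap_auc_diff(y, p_a, p_b, n_boot=1000, seed=1):
    """Resample the test set with replacement; collect AUC(p_b) - AUC(p_a)."""
    gen = np.random.default_rng(seed)
    diffs = []
    n = len(y)
    for _ in range(n_boot):
        idx = gen.integers(0, n, n)
        if len(np.unique(y[idx])) < 2:   # AUC needs both classes present
            continue
        diffs.append(roc_auc_score(y[idx], p_b[idx]) -
                     roc_auc_score(y[idx], p_a[idx]))
    return np.array(diffs)

diffs = bootstrap_auc_diff(y_true, p_base, p_kb)
ci = np.percentile(diffs, [2.5, 97.5])
print(f"mean AUC gain: {diffs.mean():.3f}, 95% CI: [{ci[0]:.3f}, {ci[1]:.3f}]")
```

A confidence interval for the AUC difference that excludes zero would indicate a significant gain for the fused model on this resampling scheme.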
Table of Contents
Thesis certification
Chinese abstract
Abstract
Table of contents
List of figures
List of tables
List of appendices
Chapter 1. Introduction
1.1. Research background
1.2. Research motivation
1.3. Research objectives
Chapter 2. Background and literature review
2.1. Clinical literature on mortality prediction
2.2. Introduction to knowledge bases
2.3. Fusion types of medical data
2.4. Machine learning
2.4.1. Decision trees (DT)
2.4.2. Random forest (RF)
2.4.3. Gradient boosting tree (GBT)
2.4.4. Histogram-based gradient boosting tree
2.5. Attention mechanism
2.5.1. Bahdanau attention
2.5.2. Luong attention
2.6. Hyperband hyperparameter optimization
Chapter 3. Research methods
3.1. Study architecture
3.2. Input data
3.2.1. Data source
3.2.2. Inclusion and exclusion criteria
3.3. Model endpoint definition
3.4. Model input features
3.5. Data preprocessing
3.5.1. Missing data imputation
3.5.2. Data normalization
3.5.3. Encoding categorical features
3.6. Data analysis
3.7. Knowledge base construction
3.8. Data splitting
3.9. Deep learning model design
3.9.1. Self-attention model
3.9.2. Hyperparameter optimization
3.9.3. Loss function definition
3.10. Model calibration
3.11. Skewed data adjustment
3.12. Model comparison
3.13. Evaluation metrics
3.14. Statistical analysis
3.15. Model interpretation
3.16. Subgroup analysis
3.16.1. Elderly subgroup
3.16.2. Intermediate-to-high surgical risk subgroup
Chapter 4. Experiment results
4.1. Analysis of the full dataset
4.1.1. Patient characteristics
4.1.2. Data distribution
4.1.3. Statistical analysis of the knowledge base
4.1.4. Model performance
4.1.5. Knowledge base model interpretation
4.1.6. Feature dependence in the knowledge base model
4.1.7. Analysis of mispredicted cases
4.1.8. Case-level explanations
4.2. Elderly subgroup
4.2.1. Basic statistics of the elderly subgroup
4.2.2. Knowledge base analysis for the elderly subgroup
4.2.3. Model performance for the elderly subgroup
4.2.4. Knowledge base model interpretation for the elderly subgroup
4.3. Intermediate-to-high surgical risk subgroup (above the annual procedure average)
4.3.1. Analysis of intermediate-to-high risk procedures (above the annual procedure average)
4.3.2. Data analysis for the subgroup (above the annual procedure average)
4.3.3. Model analysis for the subgroup (above the annual procedure average)
4.3.4. Knowledge base model interpretation for the subgroup (above the annual procedure average)
4.4. Intermediate-to-high surgical risk subgroup (above 1% mortality)
4.4.1. Analysis of intermediate-to-high risk procedures (above 1% mortality)
4.4.2. Data analysis for the subgroup (above 1% mortality)
4.4.3. Model analysis for the subgroup (above 1% mortality)
4.4.4. Knowledge base model interpretation for the subgroup (above 1% mortality)
4.5. Discussion
Chapter 5. Conclusion
5.1. Summary and research contributions
5.2. Limitations
5.3. Future work
Chapter 6. References
Appendix
Fulltext
This electronic full text is licensed only for personal, non-profit searching, reading, and printing for the purpose of academic research. Please observe the relevant provisions of the Copyright Act of the Republic of China (Taiwan); do not reproduce, distribute, adapt, repost, or broadcast it without authorization.
Thesis access permission: user-defined release date
Available:
Campus: available
Off-campus: available


Printed copies
Public access information for printed theses is relatively complete for academic year 102 (2013) and later. To inquire about the access status of printed theses from academic year 101 or earlier, please contact the printed thesis service counter of the Office of Library and Information Services. We apologize for any inconvenience.
Available: available
