Name: 劉志峰 (Chih-Feng Liu)
E-mail: not made public
Department: Graduate Institute of Electrical Engineering (Electrical Engineering)
Degree: Ph.D.
Graduation: 1st semester, academic year 101 (2012-2013)
Title (Chinese): 神經模糊建模技術在預測中的應用
Title (English): Application of Neuro-Fuzzy Modeling in Prediction
File: the electronic full text is licensed only for personal, non-profit retrieval, reading, and printing for the purpose of academic research.

etd-0121113-174802.pdf

Please comply with the Copyright Act of the Republic of China; do not reproduce, distribute, adapt, repost, or broadcast this work without authorization.

Access rights:

Print copy: publicly available immediately

Electronic copy: user-defined access, public on campus after 5 years and off campus after 5 years.
Language/pages: Chinese / 122
Statistics: this thesis has been viewed 5635 times and downloaded 499 times.

Abstract (Chinese): This study applies neuro-fuzzy modeling techniques to stock prediction. When the collected prediction data are large, reducing the data effectively becomes an important task. We therefore propose a similarity-based prototype reduction algorithm to reduce the training set size for supervised learning. Training patterns are fed into the algorithm one by one and grouped into blobs through similarity tests, and the statistical mean of each blob is regarded as a prototype representing all the patterns in that blob. The collection of these means can then replace the original training set, thereby reducing the training set used in subsequent supervised learning. This approach has several advantages: the distribution of the data in each blob is statistically well described; each obtained prototype is a good representative of the patterns in its blob; and different numbers of representatives are extracted automatically according to the similarity relationships among, and the distribution of, the original training patterns. Furthermore, the proposed method can be applied efficiently to both regression and classification problems. Experimental results show that it is more effective than other prototype reduction methods.
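The incremental grouping-by-similarity idea can be sketched as follows. This is a minimal illustration, not the thesis's exact algorithm: the Euclidean-distance similarity test against the blob's running mean and the threshold value are assumptions chosen for the sketch.

```python
import numpy as np

def reduce_prototypes(X, y, sim_threshold=0.5):
    """Group training patterns into blobs via a similarity test and
    return each blob's mean input and mean output as a prototype.
    Sketch only: distance measure and threshold are illustrative."""
    blobs = []  # each blob is a list of (x, y) pairs
    for xi, yi in zip(X, y):
        placed = False
        for blob in blobs:
            center = np.mean([p[0] for p in blob], axis=0)
            # similarity test: join the first blob whose mean is close enough
            if np.linalg.norm(xi - center) <= sim_threshold:
                blob.append((xi, yi))
                placed = True
                break
        if not placed:
            blobs.append([(xi, yi)])  # start a new blob
    proto_X = np.array([np.mean([p[0] for p in b], axis=0) for b in blobs])
    proto_y = np.array([np.mean([p[1] for p in b], axis=0) for b in blobs])
    return proto_X, proto_y

# e.g. two well-separated clusters collapse to two prototypes,
# so a later learner trains on 2 patterns instead of 4
pX, py = reduce_prototypes(np.array([[0.0], [0.1], [5.0], [5.1]]),
                           np.array([0.0, 0.0, 1.0, 1.0]))
```

Because patterns arrive one by one and the number of blobs is not fixed in advance, the number of prototypes adapts to the distribution of the data, which matches the property claimed above.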

In addition, based on a given set of training data, this study presents an application of type-2 neuro-fuzzy modeling to stock price prediction. Type-2 fuzzy rules are generated automatically by a self-constructing clustering method, and the resulting rules are refined by a hybrid learning algorithm. The given training data set is partitioned into clusters through input-similarity and output-similarity tests, and a type-2 TSK rule derived from each cluster forms a fuzzy rule base. Particle swarm optimization and least squares estimation are then used to refine the antecedent and consequent parameters associated with these rules. Experimental results on several data sets taken from the Taiwan Stock Exchange Weighted Index (TAIEX) and the NASDAQ index demonstrate that the type-2 neuro-fuzzy modeling approach is effective for stock price prediction.

Abstract (English): We propose a similarity-based prototype reduction algorithm to reduce the training set size for supervised learning. Training patterns are input to the algorithm one by one and grouped into blobs through similarity tests. The statistical mean of each blob is regarded as a prototype representing all the patterns included in the blob. The collection of such means can then be used to substitute the original training set, and, consequently, the training set for later supervised learning is reduced. This approach has several advantages. The distribution of the data contained in each blob is statistically well described. Each obtained prototype is a good representative of the patterns included in the corresponding blob. Different numbers of representatives are extracted automatically according to the similarity relationship among, and the distribution of, the original training patterns. Furthermore, our method can be applied efficiently to both regression and classification problems. Experimental results show that the proposed method performs more effectively than other prototype reduction methods.
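The hybrid learning described for the type-2 modeling uses particle swarm optimization to refine antecedent parameters. The following is a minimal, generic global-best PSO sketch; the inertia and acceleration constants, the search range, and the quadratic test objective are common textbook choices assumed for illustration, not the thesis's tuned settings.

```python
import numpy as np

def pso_minimize(loss, dim, n_particles=20, iters=100, seed=0):
    """Global-best particle swarm optimization (minimal sketch).
    w, c1, c2 are standard illustrative constants."""
    rng = np.random.default_rng(seed)
    w, c1, c2 = 0.7, 1.5, 1.5
    pos = rng.uniform(-1.0, 1.0, (n_particles, dim))   # particle positions
    vel = np.zeros((n_particles, dim))                 # particle velocities
    pbest = pos.copy()                                 # personal bests
    pbest_val = np.array([loss(p) for p in pos])
    gbest = pbest[np.argmin(pbest_val)].copy()         # global best
    for _ in range(iters):
        r1 = rng.random((n_particles, dim))
        r2 = rng.random((n_particles, dim))
        # velocity update: inertia + cognitive pull + social pull
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = pos + vel
        vals = np.array([loss(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved] = pos[improved]
        pbest_val[improved] = vals[improved]
        gbest = pbest[np.argmin(pbest_val)].copy()
    return gbest, pbest_val.min()

# Example: recover a small parameter vector by minimizing squared error
best, err = pso_minimize(
    lambda p: float(np.sum((p - np.array([0.3, -0.2])) ** 2)), dim=2)
```

In the thesis's setting the loss would be the prediction error of the fuzzy rule base as a function of its antecedent parameters, with the consequent parameters obtained separately by least squares estimation.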

Moreover, we present an application of type-2 neuro-fuzzy modeling to stock price prediction based on a given set of training data. Type-2 fuzzy rules can be generated automatically by a self-constructing clustering method, and the obtained type-2 fuzzy rules can be refined by a hybrid learning algorithm. The given training data set is partitioned into clusters through input-similarity and output-similarity tests, and a type-2 TSK rule is derived from each cluster to form a fuzzy rule base. Then the antecedent and consequent parameters associated with the rules are refined by particle swarm optimization and least squares estimation. Experimental results, obtained by running on several data sets taken from the TAIEX and NASDAQ, demonstrate the effectiveness of the type-2 neuro-fuzzy modeling approach in stock price prediction.

Keywords: prototype reduction approach; type-2 fuzzy set; TSK rule; self-constructing fuzzy clustering; least squares estimation; particle swarm optimization (PSO)

Table of Contents

Chinese Abstract III

English Abstract IV

Acknowledgments V

Table of Contents VI

List of Figures IX

List of Tables X

Chapter 1 Introduction 1

1.1 Research Background and Motivation 1

1.1.1 Processing of Massive Data 1

1.1.2 Stock Prediction 2

1.2 Research Objectives and Content 4

1.2.1 Data Preprocessing 4

1.2.2 Application of Fuzzy Modeling to Stock Prediction 5

1.3 Thesis Organization 5

Chapter 2 Introduction to Stock Prediction 6

2.1 Fundamentals of Prediction 6

2.2 Basic Stock Market Concepts 8

2.3 Stock Prediction Methods 9

Chapter 3 Support Vector Machines and Soft Computing 11

3.1 Overview of Support Vector Regression 11

3.1.1 Support Vector Machines 11

3.1.2 Mathematical Principles of SVM Classification 13

3.1.3 SVM Classification Based on Linear Programming 15

3.1.4 The Support Vector Regression (SVR) Model 16

3.1.5 Relationship between SVM Classification and Support Vector Regression 19

3.2 Neuro-Fuzzy Systems 20

3.2.1 Neural Networks 20

3.2.2 Supervised Learning Networks 21

3.2.3 Unsupervised Learning Networks 22

3.2.4 Fuzzy Theory 22

3.2.5 Neuro-Fuzzy Networks 25

3.3 Overview of Type-2 Fuzzy Systems 28

3.3.1 Basic Concepts of Type-2 Fuzzy Sets 28

3.3.2 Basic Operations on Type-2 Fuzzy Sets 29

3.3.3 Components of a Type-2 Fuzzy System 30

Chapter 4 System Modeling Techniques and Applications 37

4.1 Data Processing and Analysis 37

4.1.1 Problem Definition 37

4.1.2 Related Work 38

4.1.3 The Prototype Reduction Method 40

4.1.4 Examples 47

4.2 Type-2 Neuro-Fuzzy System Modeling 53

4.2.1 Basic Theory 53

4.2.2 Rule Base Construction 57

4.2.3 Parameter Refinement 62

4.2.4 Examples 67

Chapter 5 Experiments and Results 70

5.1 The Prototype Reduction Method 70

5.1.1 Experiment 1: Comparison on Three Classification Data Sets 70

5.1.2 Experiment 2: Comparison on Two Regression Data Sets 75

5.1.3 Experiment 3: Effects of Different SBPR Settings 76

5.2 Stock Price Prediction with Type-2 Neuro-Fuzzy Modeling 81

5.2.1 Experiment 1: TAIEX Prediction 81

5.2.2 Experiment 2: TAIEX and TAIFEX Prediction 86

5.2.3 Experiment 3: TAIEX, DJTA, and NASDAQ Prediction 89

5.2.4 Experiment 4: Comparison of T2NFS and SVM 94

5.2.5 Experiment 5: Error Measures 95

Chapter 6 Conclusions and Future Work 97

6.1 Conclusions 97

6.2 Future Research Directions 100

References 101

Advisory Committee:

周裕達 - Chair

侯俊良 - Member

吳志宏 - Member

歐陽振森 - Member

蔡賢亮 - Member

鍾澍強 - Member

李錫智 - Advisor

Oral defense date: 2012-12-19
Submission date: 2013-01-21