A New Learning Method for Improving the Performance of Combined Classification Systems
Subject area: Electrical and Computer Engineering
Authors: Seyyed Hassan Nabavi-Karizi 1, Ehsanollah Kabir 2
1 - Tarbiat Modares University
2 - Tarbiat Modares University
Keywords: classifiers, ensemble learning, error diversity, negative correlation, neural networks
Abstract:
Ensemble learning is an effective approach in machine learning that is used to improve the performance of pattern recognition systems. For this kind of learning to be beneficial, the errors of the base classifiers must differ from one another. Approaches to creating error diversity are divided into two groups: implicit and explicit. In this work, a new explicit method is presented for creating diversity among the classifiers of a combined system. In this method, a new measure of diversity is employed in the learning process of the combined system: the similarity between the error of each classifier and those of the other classifiers is included as a term in that classifier's error function and plays a role in its learning algorithm. The results of our experiments on several common data sets, for the case where the base classifiers are neural networks, show that the proposed method improves the performance of the combined classification system compared with similar methods.
The combination of multiple classifiers has been shown to be an effective way to improve the performance of pattern recognition systems. Combining multiple classifiers is effective only if the individual classifiers are accurate and diverse. The methods that have been proposed for diversity creation can be classified into implicit and explicit methods. In this paper, we propose a new explicit method for diversity creation. Our method adds a new penalty term to the learning algorithm of neural network ensembles. For each network, this term is the product of its error and the sum of the other networks' errors. Experimental results on different data sets show that the proposed method outperforms both independent training and negative correlation learning.
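To make the penalty concrete, the following is a minimal NumPy sketch of a per-network error of the form E_i = (1/2)(f_i - d)^2 + lambda * (f_i - d) * sum_{j != i} (f_j - d), which is one reading of the description above. The function names, the coefficient lam, and the use of the raw error f_i - d (rather than the deviation from the ensemble mean used in standard negative correlation learning) are assumptions based only on the abstract, not the authors' actual implementation.

    import numpy as np

    def penalized_errors(outputs, target, lam=0.5):
        # outputs: shape (M,) predictions of the M base networks for one example
        # target:  scalar desired output d
        # lam:     penalty strength (illustrative name and default)
        e = outputs - target              # per-network error e_i = f_i - d
        return 0.5 * e**2 + lam * e * (e.sum() - e)   # e_i * sum_{j != i} e_j

    def penalized_gradients(outputs, target, lam=0.5):
        # Gradient of each penalized error w.r.t. that network's own output,
        # i.e. the delta that would be fed into ordinary backpropagation.
        e = outputs - target
        return e + lam * (e.sum() - e)    # dE_i/df_i = e_i + lam * sum_{j != i} e_j

    # Toy usage: three networks, one training example with target 1.0.
    outs = np.array([0.8, 0.6, 0.2])
    print(penalized_errors(outs, target=1.0))
    print(penalized_gradients(outs, target=1.0))

With lam = 0 this reduces to independent training of each network; a positive lam penalizes networks whose errors have the same sign as the rest of the ensemble, which is the intended push toward error diversity.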