A Two-Stage Method for Combining Classifiers
Authors: Seyyed Hassan Nabavi-Karizi 1, Ehsanollah Kabir 2
1 - Shahid Montazeri Technical and Vocational Institute, Mashhad
2 - Tarbiat Modares University
Keywords: diversity creation, mixture of experts, particle swarm, classifier combination, linear combination, optimization
Abstract:
Ensemble learning is an effective approach in machine learning in which the outputs of several classifiers are combined to obtain a better approximation of an optimal classifier. For such a combination to be useful, the base classifiers must, in addition to having acceptable individual performance, make different errors, and a suitable rule must be used to combine their outputs. In this paper, a two-stage method for combining classifier outputs is proposed: in the first stage, a set of classifiers with different errors is created using the mixture-of-experts method; in the second stage, particle swarm optimization is used to find optimal weights for a linear combination of their opinions. Our experiments on several common datasets show that the proposed method improves the performance of the composite classification system compared with independent learning and with the mixture-of-experts method.
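The second stage described above (searching for linear-combination weights with particle swarm optimization) can be sketched as follows. This is a minimal illustration under assumptions, not the authors' implementation: the function names (`pso_weights`, `combine`, `error_rate`), the clip-and-renormalize projection onto the weight simplex, and the PSO parameters (inertia 0.7, acceleration coefficients 1.5) are all choices made for the example, not taken from the paper.

```python
import numpy as np

def combine(probs, w):
    """Weighted linear combination of classifier outputs.
    probs: (n_classifiers, n_samples, n_classes); w: (n_classifiers,)."""
    return np.tensordot(w, probs, axes=1)          # -> (n_samples, n_classes)

def error_rate(probs, y, w):
    """Classification error of the combined system on labels y."""
    return np.mean(combine(probs, w).argmax(axis=1) != y)

def pso_weights(probs, y, n_particles=20, n_iter=100, seed=0):
    """Search for combination weights on the probability simplex with PSO."""
    rng = np.random.default_rng(seed)
    k = probs.shape[0]
    pos = rng.dirichlet(np.ones(k), size=n_particles)   # particles = weight vectors
    vel = np.zeros_like(pos)
    pbest = pos.copy()                                  # per-particle best positions
    pbest_err = np.array([error_rate(probs, y, w) for w in pos])
    g = pbest[pbest_err.argmin()].copy()                # global best position
    g_err = pbest_err.min()
    inertia, c1, c2 = 0.7, 1.5, 1.5                     # assumed PSO parameters
    for _ in range(n_iter):
        r1, r2 = rng.random((2, n_particles, k))
        vel = inertia * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (g - pos)
        pos = np.clip(pos + vel, 1e-6, None)            # keep weights positive
        pos /= pos.sum(axis=1, keepdims=True)           # renormalize onto the simplex
        errs = np.array([error_rate(probs, y, w) for w in pos])
        better = errs < pbest_err
        pbest[better], pbest_err[better] = pos[better], errs[better]
        if errs.min() < g_err:
            g_err, g = errs.min(), pos[errs.argmin()].copy()
    return g, g_err
```

In the paper's setting, `probs` would hold the validation-set outputs of the mixture-of-experts networks from stage one, and the weights found would then be applied to test-set outputs.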
[1] F. Alimoglu and E. Alpaydin, "Combining multiple representations for pen-based handwritten digit recognition," ELEKTRIK: Turkish J. of Electrical Engineering and Computer Sciences, vol. 9, no. 1, pp. 1-12, 2001.
[2] V. Gunes and M. Menard, "Combination, cooperation and selection of classifiers: a state of the art," IJPRAI, vol. 17, no. 8, pp. 1303-1324, 2003.
[3] G. Giacinto, F. Roli, and L. Didaci, "Fusion of multiple classifiers for intrusion detection in computer networks," Pattern Recognition Letters, vol. 24, no. 12, pp. 1795-1803, Aug. 2003.
[4] Y. Lu and C. L. Tan, "Combination of multiple classifiers using probabilistic dictionary and its application to postcode recognition," Pattern Recognition, vol. 35, no. 12, pp. 2823-2832, Dec. 2002.
[5] S. Gunter, Multiple Classifier Systems in Offline Cursive Handwriting Recognition, Ph.D. Thesis, 2004.
[6] A. Jain, K. Nandakumar, and A. Ross, "Score normalization in multimodal biometric systems," Pattern Recognition, vol. 38, no. 12, pp. 2270-2285, Dec. 2005.
[7] K. Chen and L. Wang, "Methods of combining multiple classifiers with different features and their applications to text-independent speaker identification," IJPRAI, vol. 11, no. 3, pp. 417-445, 1997.
[8] G. Rogova, "Combining the results of several neural network classifiers," Neural Networks, vol. 7, pp. 777-781, May 1994.
[9] L. Hansen and P. Salamon, "Neural network ensembles," IEEE Trans. on Pattern Analysis and Machine Intelligence, vol. 12, no. 10, pp. 993-1001, Oct. 1990.
[10] A. Krogh and J. Vedelsby, "Neural network ensembles, cross validation, and active learning," in Advances in Neural Information Processing Systems, vol. 7, pp. 231-238, 1995.
[11] S. Hashem, B. Schmeiser, and Y. Yih, "Optimal linear combinations of neural networks: an overview," in Proc. IEEE Int. Conf. on Neural Networks, vol. 3, pp. 1507-1512, Orlando, Florida, 27 Jun.-2 Jul. 1994.
[12] R. Maclin and J. Shavlik, "Combining the predictions of multiple classifiers: using competitive learning to initialize neural networks," in Proc. 14th Int. Joint Conf. on Artificial Intelligence, pp. 524-530, Montreal, Canada, 1995.
[13] L. I. Kuncheva, M. Skurichina, and R. P. W. Duin, "An experimental study on diversity for bagging and boosting with linear classifiers," Information Fusion, vol. 3, no. 4, pp. 245-258, Dec. 2002.
[14] Y. Liu and X. Yao, "Ensemble learning via negative correlation," Neural Networks, vol. 12, no. 10, pp. 1399-1404, Dec. 1999.
[15] E. Bauer and R. Kohavi, "An empirical comparison of voting classification algorithms: bagging, boosting, and variants," Machine Learning, vol. 36, no. 1-2, pp. 105-142, Jul./Aug. 1999.
[16] W. Wang, P. Jones, and D. Partridge, "Diversity between neural networks and decision trees for building multiple classifier systems," Lecture Notes in Computer Science, vol. 1857, pp. 240-249, 2000.
[17] R. P. W. Duin and D. M. J. Tax, "Experiments with classifier combining rules," in Proc. Int. Workshop on Multiple Classifier Systems, LNCS, vol. 1857, pp. 16-29, 2000.
[18] L. Breiman, "Bagging predictors," Machine Learning, vol. 24, no. 2, pp. 123-140, Aug. 1996.
[19] Y. Raviv and N. Intrator, "Bootstrapping with noise: an effective regularization technique," Connection Science, vol. 8, pp. 355-372, 1996.
[20] Y. Freund and R. E. Schapire, "Experiments with a new boosting algorithm," in Proc. 13th Int. Conf. on Machine Learning, pp. 148- 156, 1996.
[21] H. Drucker, C. Cortes, L. D. Jackel, Y. LeCun, and V. Vapnik, "Boosting and other ensemble methods," Neural Computation, vol. 6, pp. 1289-1301, Nov. 1994.
[22] S. H. Nabavi-Karizi and E. Kabir, "Combining classifiers: diversity creation and combination rules," Journal of Computer Science and Engineering, Scientific-Research Publication of the Computer Society of Iran, vol. 3, no. 3, pp. 95-107, Fall 2005.
[23] N. Ueda, "Optimal linear combination of neural networks for improving classification performance," IEEE Trans. on Pattern Anal. Mach. Intell., vol. 22, no. 2, pp. 207-215, Feb. 2000.
[24] C. L. Liu, "Classifier combination based on confidence transformation," Pattern Recognition, vol. 38, no. 1, pp. 11-28, Jan. 2005.
[25] M. Jordan and R. Jacobs, Modular and Hierarchical Learning Systems. The Handbook of Brain Theory and Neural Networks, MIT Press, Cambridge, MA, 1995.
[26] J. Kennedy and R. C. Eberhart, "Particle swarm optimization," in Proc. of IEEE Int. Conf. on Neural Networks (ICNN), vol. 4, pp. 1942-1948, 1995.
[27] S. H. Zahiri, "A theoretical study of the performance of particle swarm classifiers," in Proc. 11th Int. CSI Computer Conf., Computer Society of Iran, vol. 1, pp. 3-10, Feb. 2006.
[28] I. C. Trelea, "The particle swarm optimization algorithm: convergence analysis and parameter selection," Information Processing Letters, vol. 85, no. 6, pp. 317-325, Mar. 2003.
[29] M. Rostami Shahrbabaki and H. Nezamabadi-pour, "Feature selection in semantic image classification using the PSO algorithm," in Proc. 11th Int. CSI Computer Conf., Computer Society of Iran, vol. 1, pp. 269-276, Feb. 2006.
[30] J. Kennedy and R. C. Eberhart, Swarm Intelligence, Morgan Kaufmann Publishers, 2001.
[31] Y. Liu, X. Yao, Q. Zhao, and T. Higuchi, "An experimental comparison of ensemble learning methods on decision boundaries," in Proc. IEEE International Joint Conf. on Neural Networks, IJCNN, vol. 1, pp. 221-226, Honolulu, US, May 2002.
[32] http://www.ics.uci.edu/~mlearn/databases/ and www.dice.ucl.ac.be/neural-nets/research/projects/ELENA/database/.