Cautious Classification of Hyper-Rectangular, Hyper-Spherical, and Hyper-Ellipsoidal Data with a Maximum Margin Symmetric with Respect to the Data Edge
Subject area: Electrical and Computer Engineering
Yahya Forghani 1, Misagh Sadat Hejazi 2, Hadi Sadoghi Yazdi 3
1 - Islamic Azad University, Mashhad branch
2 - Ferdowsi University of Mashhad
3 - Ferdowsi University of Mashhad
Keywords: uncertain data, test time, training time, cautious robust classifier
Abstract:
A robust classification model is a non-standard model for learning a classifier from a data set affected by uncertainty. Any classification model whose set of feasible solutions contains a meaningless solution is called an incautious model. For a given training data set, the optimal solution of an incautious robust classification model may not be a hyperplane, in which case the data cannot be classified at the test stage. In this paper, incautious robust classification models are introduced and their problems are examined; then, by changing the loss function of the robust classifier, a cautious robust classification model is proposed to prevent this incautiousness. The proposed cautious model is converted to standard form, and techniques are presented to reduce its training time and test time. In the experiments, the proposed cautious robust classifier was compared with several incautious robust models on incomplete training data sets and on complete, certain training data sets. The results showed that on the incomplete data sets the proposed model achieved lower training time, test time, and error rate than the incautious models, and that on the complete certain data sets it achieved lower training and test times. These results confirm the effectiveness of adding caution to a robust classifier.
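To make the idea of robust classification of hyper-rectangular (box-shaped) uncertain data concrete, the sketch below trains a linear max-margin classifier whose hinge loss is evaluated at the worst case over each sample's box. For a box of per-feature radii delta_i around x_i, the worst-case margin is y_i(w·x_i + b) − delta_i·|w|, which is the standard reformulation for interval uncertainty. This is only an illustrative subgradient-descent sketch, not the paper's proposed cautious model or its standard-form solver; the toy data and all names (`train_robust_box_svm`, `delta`) are hypothetical.

```python
import numpy as np

def train_robust_box_svm(X, y, delta, C=1.0, lr=0.01, epochs=500):
    """Robust linear classifier for hyper-rectangular uncertain data.

    Each sample is only known to lie in the box X[i] +/- delta[i].
    Worst-case hinge loss over the box replaces the margin
    y_i(w.x_i + b) with y_i(w.x_i + b) - delta_i . |w|, so we minimize
      0.5*||w||^2 + C * sum_i max(0, 1 - y_i(w.x_i + b) + delta_i . |w|)
    by plain subgradient descent (illustrative only).
    """
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        # worst-case margins over each sample's box
        margins = y * (X @ w + b) - delta @ np.abs(w)
        active = margins < 1.0          # samples whose worst-case hinge loss is positive
        gw = w.copy()                   # gradient of the 0.5*||w||^2 regularizer
        gb = 0.0
        if active.any():
            gw -= C * (y[active, None] * X[active]).sum(axis=0)
            gw += C * delta[active].sum(axis=0) * np.sign(w)   # subgradient of delta . |w|
            gb = -C * y[active].sum()
        w -= lr * gw
        b -= lr * gb
    return w, b

# Hypothetical toy data: two separable classes, box radius 0.1 per feature
X = np.array([[2., 2.], [3., 1.], [2.5, 3.], [-2., -2.], [-3., -1.], [-2.5, -3.]])
y = np.array([1., 1., 1., -1., -1., -1.])
delta = 0.1 * np.ones_like(X)
w, b = train_robust_box_svm(X, y, delta)
preds = np.sign(X @ w + b)
```

Because the worst-case term delta_i·|w| shrinks every margin, the learned hyperplane stays a distance of at least delta away from each box, which is the cautious behavior the abstract describes; spherical and ellipsoidal uncertainty lead to analogous norm penalties (cf. second-order cone formulations in [1] and [41]).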
[1] M. S. Lobo, L. Vandenberghe, S. Boyd, and H. Lebret, "Applications of second-order cone programming," Linear Algebra and Its Applications, vol. 284, no. 1-3, pp. 193-228, Nov. 1998.
[2] M. S. Bazaraa, H. D. Sherali, and C. M. Shetty, Nonlinear Programming: Theory and Algorithms, John Wiley & Sons, 2013.
[3] G. Chechik, G. Heitz, G. Elidan, P. Abbeel, and D. Koller, "Max-margin classification of data with absent features," J. of Machine Learning Research, vol. 9, pp. 1-21, Jan. 2008.
[4] S. Kambar, Generating Synthetic Data by Morphing Transformation for Handwritten Numeral Recognition (with V-SVM), Concordia University, 2005.
[5] P. Simard, D. Steinkraus, and J. Platt, "Best practices for convolutional neural networks applied to visual document analysis," in Proc. 7th Int. Conf. on Document Analysis and Recognition, pp. 958-963, Edinburgh, UK, 3-6 Aug. 2003.
[6] G. Loosli, A. S. Canu, and L. Bottou, Large-Scale Kernel Machines, Massachusetts: MIT Press, 2007.
[7] G. M. Fung, O. L. Mangasarian, and J. W. Shavlik, "Knowledge-based support vector machine classifiers," in Proc. of the 15th Int. Conf. on Neural Information Processing Systems, NIPS'02, pp. 537-544, Vancouver, Canada, 9-14 Dec. 2002.
[8] V. Jeyakumar, J. Ormerod, and R. S. Womersley, "Knowledge-based semidefinite linear programming classifiers," Optimisation Methods Software, vol. 21, no. 5, pp. 693-706, 2006.
[9] E. Carrizosa, J. Gordillo, and F. Plastria, Classification Problems with Imprecise Data through Separating Hyperplanes, MOSI Department, Vrije Universiteit Brussel MOSI/33, 2007.
[10] T. B. Trafalis and R. C. Gilbert, "Robust support vector machines for classification and computational issues," Optimisation Methods Software, vol. 22, no. 1, pp. 187-198, 2007.
[11] T. B. Trafalis and R. C. Gilbert, "Robust classification and regression using support vector machines," European J. of Operational Research, vol. 173, no. 3, pp. 893-909, 16 Sept. 2006.
[12] H. Xu, Robust Decision Making and Its Applications in Machine Learning, McGill University, 2009.
[13] H. Xu, C. Caramanis, and S. Mannor, "Robustness and regularization of support vector machines," J. of Machine Learning Research, vol. 10, pp. 1485-1510, Jul. 2009.
[14] Y. Forghani and H. Sadoghi, "Comment on robustness and regularization of support vector machines by H. Xu et al. (J. of machine learning research, vol. 10, pp. 1485-1510, 2009)," The J. of Machine Learning Research, vol. 14, no. 1, pp. 3493-3494, Nov. 2013.
[15] Y. Shi, Y. Tian, G. Kou, Y. Peng, and J. Li, Optimization Based Data Mining: Theory and Applications, Springer Science & Business Media, 2011.
[16] H. Xu, C. Caramanis, S. Manor, and S. Yun, "Risk sensitive robust support vector machines," in Proc. of the 48th IEEE Conf. on Decision and Control, held jointly with the 2009 28th Chinese Control Conf., CDC/CCC'09, pp. 4655-4661, Shanghai, China, 15-18 Dec. 2009.
[17] H. Xu, S. Mannor, and C. Caramanis, "Robustness, risk, and regularization in support vector machines," J. of Machine Learning Research, vol. 10, pp. 1485-1510, Dec. 2009.
[18] H. Xu and S. Mannor, "Robustness and generalization," Machine Learning, vol. 86, no. 3, pp. 391-423, Mar. 2012.
[19] W. An and Y. Sun, "PKSVR: a novel prior knowledge-based support vector regression," Asian J. of Information Technology, vol. 4, no. 11, pp. 978-980, 2005.
[20] B. Liu, Y. Xiao, L. Cao, and P. S. Yu, "One-class-based uncertain data stream learning," in Proc. of the 11th SIAM Int. Conf. on Data Mining, pp. 992-1003, Mesa, AZ, USA, 28-30 Apr. 2011.
[21] J. B. Pothin and C. Richard, "Incorporating prior information into support vector machines in the form of ellipsoidal knowledge sets," in Proc. 14th IEEE European Signal Processing Conf., EUSIPCO'06, 4 pp., Florence, Italy, 4-8 Sept. 2006.
[22] Q. V. Le, A. J. Smola, and T. Gartner, "Simpler knowledge-based support vector machines," in Proc. of the 23rd Int. Conf. on Machine Learning, pp. 521-528, Pittsburgh, PA, USA, 25-29 Jun. 2006.
[23] C. H. Teo, et al., "Convex learning with invariances," in Advances in Neural Information Processing Systems, 2008.
[24] A. B. Ji, J. H. Pang, and H. J. Qiu, "Support vector machine for classification based on fuzzy training data," Expert Systems with Applications, vol. 37, no. 4, pp. 3495-3498, Apr. 2010.
[25] L. Yang and H. Dong, "Support vector machine with truncated pinball loss and its application in pattern recognition," Chemometrics Intelligent Laboratory Systems, vol. 177, pp. 89-99, 15 Jun. 2018.
[26] B. B. Gao and J. J. Wang, A Fast and Robust TSVM for Pattern Classification, arXiv preprint arXiv:.05406, 2017.
[27] J. Chen, T. Takiguchi, and Y. Ariki, "A robust SVM classification framework using PSM for multi-class recognition," EURASIP J. on Image Video Processing, vol. 2015, no. 7, 12 pp., Dec. 2015.
[28] S. Katsumata and A. Takeda, "Robust cost sensitive support vector machine," in Proc. of the 18th Int. Conf. on Artificial Intelligence and Statistics (AISTATS), JMLR: W&CP, vol. 38, no. 18, pp.434-443, 2015.
[29] C. Tzelepis, V. Mezaris, and I. Patras, "Linear maximum margin classifier for learning from uncertain data," IEEE Trans. on Pattern Analysis Machine Intelligence, vol. 40, no. 12, pp. 2948-2962, Dec. 2017.
[30] Y. Forghani and H. S. Yazdi, "Fuzzy min-max neural network for learning a classifier with symmetric margin," Neural Processing Letters, vol. 42, no. 2, pp. 317-353, Oct. 2015.
[31] S. Jing and L. Yang, "A robust extreme learning machine framework for uncertain data classification," The J. of Supercomputing, pp. 1-27, 2018. https://doi.org/10.1007/s11227-018-2430-6
[32] E. Adeli, et al., "Semi-supervised discriminative classification robust to sample-outliers and feature-noises," IEEE Trans. on Pattern Analysis Machine Intelligence, vol. 41, no. 2, pp. 515-522, Feb. 2018.
[33] W. Y. Cheng and C. F. Juang, "An incremental support vector machine-trained TS-type fuzzy system for online classification problems," Fuzzy Sets Systems, vol. 163, no. 1, pp. 24-44, 16 Jan. 2011.
[34] MOSEK, Available: https://www.mosek.com/.
[35] D. C. Montgomery, Design and Analysis of Experiments, John Wiley & Sons, 2017.
[36] J. Demsar, "Statistical comparisons of classifiers over multiple data sets," J. of Machine Learning Research, vol. 7, no. 1, pp. 1-30, Jan. 2006.
[37] O. T. Yildiz and E. Alpaydin, "Ordering and finding the best of K>2 supervised learning algorithms," IEEE Trans. on Pattern Analysis Machine Intelligence, vol. 28, no. 3, pp. 392-402, Mar. 2006.
[38] B. M. Marlin, Missing Data Problems in Machine Learning, Department of Computer Science, University of Toronto, 2008.
[39] A. P. Dempster, N. M. Laird, and D. B. Rubin, "Maximum likelihood from incomplete data via the EM algorithm," J. of the Royal Statistical Society. Series B, vol. 39, no. 1, pp. 1-38, 1977.
[40] D. B. Rubin, "Multiple imputation after 18+ years," J. of the American Statistical Association, vol. 91, no. 434, pp. 473-489, Jun. 1996.
[41] P. K. Shivaswamy, C. Bhattacharyya, and A. J. Smola, "Second order cone programming approaches for handling missing and uncertain data," J. of Machine Learning Research, vol. 7, pp. 1283-1314, Jul. 2006.
[42] S. Garcia, A. Fernandez, J. Luengo, and F. Herrera, "Advanced nonparametric tests for multiple comparisons in the design of experiments in computational intelligence and data mining: experimental analysis of power," Information Sciences, vol. 180, no. 10, pp. 2044-2064, 15 May 2010.
[43] D. J. Sheskin, Handbook of Parametric and Nonparametric Statistical Procedures, Florida, US: Chapman & Hall, 2007.
[44] H. Finner, "On a monotonicity problem in step-down multiple test procedures," J. of the American Statistical Association, vol. 88, no. 423, pp. 920-923, Sept. 1993.
[45] Y. Forghani, S. Effati, H. Sadoughi Yazdi, and R. Sigari Tabrizi, "Support vector data description by using hyper-ellipse instead of hyper-sphere," in Proc. 1st IEEE Int. eConf. on Computer and Knowledge Engineering, ICCKE'11, pp. 22-27, Mashhad, Iran, 13-14 Oct. 2011.
[46] L. M. Manevitz, and M. Yousef, "One-class SVMs for document classification," J. of Machine Learning Research, vol. 2, pp. 139-154, Dec. 2001.
[47] C. F. Juang, S. H. Chiu, and S. W. Chang, "A self-organizing TS-type fuzzy network with support vector learning and its application to classification problems," IEEE Trans. on Fuzzy Systems, vol. 15, no. 5, pp. 998-1008, Oct. 2007.
[48] H. Zhang, J. Liu, D. Ma, and Z. Wang, "Data-core-based fuzzy min-max neural network for pattern classification," IEEE Trans. on Neural Networks, vol. 22, no. 12, pp. 2339-2352, Dec. 2011.