A Distributed Solution for Clustering Mixed Big Data
Subject area: Electrical and Computer Engineering
Mohsen Mahmoudi 1, Negin Daneshpour 2
1 - Shahid Rajaee Teacher Training University
2 - Shahid Rajaee University
Keywords: data repair, distributed processing, clustering, big data, mixed data
Abstract:
Given the ever-increasing rate of information generation and the need to turn information into knowledge, data mining algorithms are in strong demand. Clustering is one of the data mining techniques, and its development leads to a better understanding of the surrounding environment. In this paper, a dynamic and scalable solution is presented for clustering mixed, large-scale data that also contains missing values. Because it targets the big data domain, the proposed solution processes the data in a distributed fashion. The solution combines common distance metrics with the concept of shared nearest neighbors and employs a form of geometric codification; a method for recovering missing values in the dataset is included as well. Scalability and speed-up can be achieved by parallelizing and distributing the processing across multiple nodes, and the proposed algorithm uses these techniques to that end. The solution is evaluated against other approaches in terms of speed, accuracy, and memory consumption.
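The core idea the abstract names, combining a mixed-type distance metric with shared-nearest-neighbor (SNN) similarity, can be pictured with a minimal sketch. The Gower-style distance and the record layout below are illustrative assumptions, not the paper's actual metric:

```python
import numpy as np

def mixed_distance(a, b, num_idx, cat_idx, num_range):
    """Gower-style distance for mixed records: range-normalized absolute
    difference on numeric attributes, 0/1 mismatch on categorical ones.
    (Illustrative assumption; the paper's metric may differ.)"""
    d_num = sum(abs(float(a[i]) - float(b[i])) / num_range[i] for i in num_idx)
    d_cat = sum(0.0 if a[i] == b[i] else 1.0 for i in cat_idx)
    return (d_num + d_cat) / (len(num_idx) + len(cat_idx))

def snn_similarity(dist, k):
    """Shared-nearest-neighbor similarity: for each pair of points, count
    how many of their k nearest neighbors they have in common."""
    n = dist.shape[0]
    # argsort puts each point itself first (distance 0), so skip position 0
    knn = [set(np.argsort(dist[i])[1:k + 1]) for i in range(n)]
    return np.array([[len(knn[i] & knn[j]) for j in range(n)]
                     for i in range(n)])
```

With four records of one numeric and one categorical attribute each, the pairs that are both numerically close and share a category end up with the highest SNN counts, which is what makes SNN robust when raw distances on mixed attributes are hard to compare.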
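The distribution scheme the abstract alludes to can be sketched as a map/reduce split of one clustering iteration. The sketch below simulates the two phases sequentially on one-dimensional numeric data (the function names and data are hypothetical); the actual solution operates on mixed data with its map tasks running in parallel across nodes:

```python
def map_assign(chunk, centers):
    """Map phase: a node assigns each point in its data chunk to the
    nearest current center, emitting (center_index, point) pairs."""
    pairs = []
    for x in chunk:
        j = min(range(len(centers)), key=lambda c: abs(x - centers[c]))
        pairs.append((j, x))
    return pairs

def reduce_update(pairs, k):
    """Reduce phase: average the points grouped by center index to get
    the new centers; an empty cluster yields None."""
    sums, counts = [0.0] * k, [0] * k
    for j, x in pairs:
        sums[j] += x
        counts[j] += 1
    return [sums[j] / counts[j] if counts[j] else None for j in range(k)]

# One iteration over two "nodes" (chunks); a real MapReduce job would run
# map_assign on each node in parallel and shuffle the pairs to reducers.
chunks = [[1.0, 1.1], [5.0, 5.2]]
centers = [0.0, 6.0]
pairs = [p for chunk in chunks for p in map_assign(chunk, centers)]
new_centers = reduce_update(pairs, k=2)
```

Because each map task only needs the current centers and its own chunk, adding nodes shrinks the per-node workload, which is the source of the scalability the abstract claims.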