High Performance Computing via Improvement of Random Forest Algorithm Using Compression and Parallelization Techniques
Subject Areas: Electrical and Computer Engineering
Authors: Naeimeh Mohammad Karimi 1, Mohammad Ghasemzadeh 2, Mahdi Yazdian Dehkordi 3, Amin Nezarat 4
1 - Yazd University
2 - Faculty member of the university
3 - Yazd University
4 -
Keywords: Machine Learning, Random Forest, High Performance Computing, Compression, Parallelization, Big Data
Abstract:
This research seeks to improve one of the most widely used algorithms in machine learning, the random forest algorithm. For this purpose, we use compression and parallelization techniques. The main challenge we address is the application of the random forest algorithm to processing and analyzing big data. In such cases, the algorithm does not deliver its usual, required performance, due to the large number of memory accesses it needs. This research demonstrates how the desired goal can be achieved with an innovative compression method combined with parallelization techniques. In this regard, identical components of the trees in the random forest are combined and shared. In addition, a vectorization-based parallelization approach, together with a shared-memory-based parallelization method, is used in the processing phase. To evaluate its performance, we ran the solution on Kaggle benchmarks, which are widely used in machine learning competitions. The experimental results show that the proposed compression method alone reduces the required processing time by 61%, while compression combined with the aforementioned parallelization methods yields an improvement of about 95%. Overall, this research indicates that the proposed solution is an effective step toward high performance computing.
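To make the compression idea concrete, the following is a minimal illustrative sketch, not the authors' actual implementation: one common way to "combine and share identical components of the trees" is hash-consing, where structurally identical subtrees across the forest are replaced by a single shared object. The tuple-based tree encoding and the helper names (`compress_forest`, `unique_subtrees`, `leaf`, `node`) are assumptions made for this example only.

```python
# Hedged sketch of subtree sharing in a random forest via hash-consing.
# A node is either ("leaf", value) or (feature, threshold, left, right).

def compress_forest(forest):
    """Intern every subtree so structurally identical ones become one shared object."""
    cache = {}

    def intern(tree):
        if tree[0] == "leaf":
            key = tree
        else:
            feature, threshold, left, right = tree
            left, right = intern(left), intern(right)
            tree = (feature, threshold, left, right)
            # Children are already canonical, so their identities form the key.
            key = (feature, threshold, id(left), id(right))
        return cache.setdefault(key, tree)

    return [intern(tree) for tree in forest]


def unique_subtrees(forest):
    """Count distinct node objects in memory (a shared node counts once)."""
    seen = set()

    def walk(tree):
        if id(tree) not in seen:
            seen.add(id(tree))
            if tree[0] != "leaf":
                walk(tree[2])
                walk(tree[3])

    for tree in forest:
        walk(tree)
    return len(seen)


# Helpers that build fresh node objects at runtime.
def leaf(v):
    return ("leaf", v)

def node(f, t, l, r):
    return (f, t, l, r)

# Two trees containing a structurally identical subtree:
forest = [node(1, 1.0, node(0, 0.5, leaf(0), leaf(1)), leaf(1)),
          node(2, 2.0, node(0, 0.5, leaf(0), leaf(1)), leaf(1))]

compressed = compress_forest(forest)
print(unique_subtrees(forest), unique_subtrees(compressed))  # 10 nodes shrink to 5
```

Because the shared nodes are structurally equal to the originals, predictions are unchanged; the win is a smaller memory footprint and fewer cache-missing memory accesses during traversal, which is the bottleneck the abstract identifies for big data workloads.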