Investigating the Effect of Hardware Parameter Adjustments on the Energy Consumption of the Sparse Matrix Multiplication Algorithm on GPUs
Subject Areas: General
Mina Ashouri 1, Farshad Khunjush 2
Keywords: sparse matrix multiplication, energy consumption, performance, sparse storage formats, GPU
Abstract:
Sparse matrix multiplication is a simple but very important kernel in linear algebra and in scientific programs in mathematics and physics, and because of its parallel nature, the GPU is one of the most suitable and important platforms on which to execute it. Even though in recent years researchers have emphasized treating energy consumption as a primary design goal alongside performance, very little effort has been made to improve the energy consumption of this algorithm on GPUs. This article addresses the issue from the perspective of energy efficiency, that is, the performance obtained per unit of energy consumed. Exploiting the configuration capability introduced in modern GPUs, we statistically examine the behavior of this algorithm under different sparse matrix storage formats and different hardware settings over more than 200 sample sparse matrices, and derive the best configuration settings for the sparse matrix multiplication algorithm with each storage format on the GPU. For each storage format, the configuration that performs best across all tested samples is selected.
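Read this way, the energy-efficiency objective is performance per unit of power, which reduces to useful work per joule; this is our reading of the abstract, not a formula the authors state explicitly:

\[
\text{Energy efficiency} \;=\; \frac{\text{performance}}{\text{power}}
\;=\; \frac{N_{\mathrm{flop}}/t}{E/t}
\;=\; \frac{N_{\mathrm{flop}}}{E}
\qquad \left[\mathrm{FLOP/J}\right]
\]

where \(N_{\mathrm{flop}}\) is the number of floating-point operations performed by the kernel, \(t\) its runtime, and \(E\) the energy consumed during that run.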
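As a concrete illustration of the two ingredients the abstract combines, a sparse storage format and a per-kernel hardware setting, the following is a minimal CUDA sketch of scalar sparse matrix-vector multiplication over the CSR format, together with one representative configuration knob on Kepler-class GPUs (the configurable L1/shared-memory split). This is an illustrative sketch under our own assumptions, not the authors' tuned implementation; the kernel name, block size, and cache preference are hypothetical choices.

    #include <cuda_runtime.h>

    // Scalar CSR SpMV: one thread computes one row of y = A * x.
    __global__ void spmv_csr_scalar(int num_rows,
                                    const int   *row_ptr,  // size num_rows + 1
                                    const int   *col_idx,  // column index of each nonzero
                                    const float *values,   // value of each nonzero
                                    const float *x,        // dense input vector
                                    float       *y)        // dense output vector
    {
        int row = blockIdx.x * blockDim.x + threadIdx.x;
        if (row < num_rows) {
            float dot = 0.0f;
            for (int j = row_ptr[row]; j < row_ptr[row + 1]; ++j)
                dot += values[j] * x[col_idx[j]];
            y[row] = dot;
        }
    }

    void launch_spmv(int num_rows, const int *d_row_ptr, const int *d_col_idx,
                     const float *d_values, const float *d_x, float *d_y)
    {
        // One example of a per-kernel hardware setting: on Kepler-class GPUs
        // the on-chip memory split between L1 cache and shared memory is
        // configurable per kernel (the preference below is a hypothetical choice).
        cudaFuncSetCacheConfig(spmv_csr_scalar, cudaFuncCachePreferL1);

        int threads = 256;  // block size is another tunable setting
        int blocks  = (num_rows + threads - 1) / threads;
        spmv_csr_scalar<<<blocks, threads>>>(num_rows, d_row_ptr, d_col_idx,
                                             d_values, d_x, d_y);
    }

Other storage formats commonly studied for SpMV (COO, ELL, HYB) change only the kernel body; the sweep over hardware settings stays the same.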