Task Scheduling of IoT Applications in the Fog Computing Environment Using Deep Reinforcement Learning
Subject area: Electrical and Computer Engineering
Pegah Gazori 1, Dadmehr Rahbari 2, Mohsen Nickray 3
1 - University of Qom
2 - University of Qom
3 - University of Qom
Keywords: Internet of Things, Fog Computing, Task Scheduling, Deep Reinforcement Learning
Abstract:
With the spread of Internet of Things (IoT) technology in recent years, the number of smart devices, and consequently the volume of data they collect, has been growing rapidly. At the same time, most IoT applications require real-time data analysis and low-latency service delivery. Under such conditions, sending the collected data to cloud data centers for processing cannot meet these requirements, and the fog computing paradigm is a better choice. Because the computational resources available at fog nodes are limited, using them efficiently is of particular importance. This paper addresses the scheduling of IoT application tasks in the fog computing environment. The main goal is to reduce service delivery latency, and a deep reinforcement learning approach is employed to achieve it. The proposed method combines the Q-Learning algorithm with deep learning and with the experience replay and target network techniques. Simulation results show that the resulting DQLTS algorithm improves the ASD metric by 76% compared to the QLTS algorithm and by 6.5% compared to the RS algorithm, while also converging faster than QLTS.
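The combination the abstract names (Q-Learning, deep learning, experience replay, and a target network) is the standard deep Q-network (DQN) recipe: a neural network approximates Q(s, a; θ), and minibatches sampled from a replay buffer are regressed toward the bootstrapped target y = r + γ · max_{a'} Q(s', a'; θ⁻), where θ⁻ are the periodically frozen weights of the target network. The sketch below is a minimal illustration of that recipe under a task-scheduling reading of the problem; the state encoding, action set, network shape, hyperparameters, and the helper names (`build_q_network`, `train_step`, `act`) are illustrative assumptions, not the paper's actual DQLTS implementation.

```python
import random
from collections import deque

import numpy as np
from tensorflow import keras

# Illustrative dimensions (assumptions): the state could encode queue lengths
# and resource utilization of fog nodes; an action picks the node for a task.
STATE_DIM = 8       # assumed size of the scheduler's state vector
N_ACTIONS = 4       # assumed number of candidate fog nodes
GAMMA = 0.95        # discount factor
BATCH_SIZE = 32

def build_q_network():
    """A small fully connected Q-network: state -> one Q-value per action."""
    model = keras.Sequential([
        keras.layers.Input(shape=(STATE_DIM,)),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(64, activation="relu"),
        keras.layers.Dense(N_ACTIONS, activation="linear"),
    ])
    model.compile(optimizer=keras.optimizers.Adam(1e-3), loss="mse")
    return model

q_net = build_q_network()        # online network, updated every training step
target_net = build_q_network()   # target network, synchronized periodically
target_net.set_weights(q_net.get_weights())

replay = deque(maxlen=10_000)    # experience replay buffer of transitions

def act(state, epsilon):
    """Epsilon-greedy action selection over the online network."""
    if random.random() < epsilon:
        return random.randrange(N_ACTIONS)
    q_values = q_net.predict(state[None, :], verbose=0)
    return int(np.argmax(q_values[0]))

def train_step():
    """Sample a minibatch and regress Q(s, a) toward r + gamma * max_a' Q_target(s', a')."""
    if len(replay) < BATCH_SIZE:
        return
    batch = random.sample(replay, BATCH_SIZE)
    states = np.array([t[0] for t in batch])
    actions = np.array([t[1] for t in batch])
    rewards = np.array([t[2] for t in batch], dtype=np.float32)
    next_states = np.array([t[3] for t in batch])
    dones = np.array([t[4] for t in batch], dtype=np.float32)

    # Bootstrapped targets come from the frozen target network, which is what
    # stabilizes learning compared to plain deep Q-learning.
    next_q = target_net.predict(next_states, verbose=0).max(axis=1)
    targets = q_net.predict(states, verbose=0)
    targets[np.arange(BATCH_SIZE), actions] = rewards + (1.0 - dones) * GAMMA * next_q
    q_net.fit(states, targets, epochs=1, verbose=0)
```

In a full training loop, the scheduler would observe the environment state, pick a fog node with `act`, store the resulting (state, action, reward, next_state, done) transition in `replay`, call `train_step()` each step, and copy the online weights into `target_net` every fixed number of steps.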